Tuesday, June 25, 2013

How fast can we make interpreted Python?

Why is Python slow? A lot of blame lies with the interpreter's ponderous data representation. Whenever you create anything in Python, be it a lowly number, a dictionary, or some user-defined object (e.g. a CatSeekingMissileFactoryFactory), under the hood Python represents it as a big fat structure.

Why can't an integer just be represented as, well...an integer? Being a dynamic language, Python lacks any external information about the types of the objects it encounters: how is it to know whether one blob of 64 bits represents an integer, a float, or a heap-allocated object? At the very minimum, a type tag must be tacked onto each value. Furthermore, since the Python interpreter uses reference counting for garbage collection, each object is tasked with remembering how many other objects refer to it. Throw in a few other pressing exigencies and you end up with objects in Python that are significantly larger than an equivalent piece of data from a compiled language. To actually get useful work done the Python interpreter has to perform a delicate dance of wasteful indirection: checking tags here, calling some unknown function pointer there and (finally!) pulling data from the heap into registers.

The problem is not that Guido van Rossum doesn't care about performance or that Python is badly written! The internals of the interpreter are thoughtfully designed and optimized to eke out speed gains wherever the opportunity presents itself. However, Python's unholy marriage to its object representation seems to make the significant and enduring performance gap between Python and many other languages inescapable.

Didn't PyPy already solve this problem?

So, why not cast off the yoke of PyObjects? PyPy has already shown that, if you give a JIT compiler the freedom to lay things out however it wants, Python can be made a lot faster. However, all that speed comes at a terrible cost: the loss of compatibility with extension modules that rely on the old Python C API. PyPy is allowed to think of an int as just some bits in a register but your library still expects a big struct, with a type tag and refcount and all. Despite many efforts by the PyPy team to help the rest of us slowpokes transition to their otherwise very impressive system, all the libraries which expect PyObjects are too important to be abandoned (and often too large to be rewritten).

Can we do anything to make Python faster without having to give up libraries like SciPy?

How about a faster interpreter?

Earlier this year, my officemate Russell and I got it into our heads that CPython hadn't reached quite far enough into the bag of interpreter implementation tricks.

  • Why does Python use a stack-based virtual machine when a register-based machine might have lower dispatch overhead?
  • Why does CPython only perform peephole optimizations? Why not use simple dataflow analyses?
  • Attribute lookups are very repetitive; would some sort of runtime hints/feedback be useful for cutting down on hash function evaluations?
  • Why not use bit-tagging to store integers directly inside PyObject pointers? It's a common technique used in the implementation of other high-level languages. Perhaps the Python developers shouldn't have rejected it? (A toy sketch of the idea follows this list.)
  • Call frames in Python seem a bit bloated and take a long time to set up; can we make function calls cheaper?
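As a toy sketch of the bit-tagging idea (nothing CPython-specific, just the general trick): steal the lowest bit of a machine word, so a word can carry either a small integer or a pointer.

    # Toy illustration of pointer tagging: if the low bit is set, the word
    # holds a small integer shifted left by one; otherwise it's a pointer.
    def tag_int(n):
        return (n << 1) | 1

    def is_tagged_int(word):
        return word & 1 == 1

    def untag_int(word):
        return word >> 1

    word = tag_int(42)
    assert is_tagged_int(word)
    assert untag_int(word) == 42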
To try out these ideas (and a few others), we implemented a new Python interpreter called Falcon. It's not meant as a complete replacement for Python, since Falcon falls back on the Python C API to implement constructs it doesn't support natively. However, through that fallback mechanism, Falcon should theoretically be able to run any Python code (albeit some of it without any performance gains at all).

How much faster is Falcon? Unfortunately, not nearly as fast as we hoped. In the best cases, such as when lots of integers are being manipulated in a loop, Falcon might get up to 3X faster than the regular Python interpreter. More often, the gains hover around a meager 20%. Still, we found the project interesting enough to write a paper about it (which we submitted to the Dynamic Languages Symposium). 

If you are interested in trying Falcon, you can get it on github. Let me know how it works for you!

Tuesday, June 4, 2013

Faster image filters in Python with Parakeet

In the early days of computer vision — before Big Data, small data, or even very much data at all — it was popular to manually construct vision algorithms out of neighborhood operations. A neighborhood operation is an image transformation which replaces each pixel with some simple function of the surrounding pixels' values. For example, if you replace each pixel with the median of its neighborhood, you get back an image which looks similar but is somewhat less detailed and (usefully) less noisy. Alternatively, instead of the median value, you can replace each pixel with the minimum or maximum of its neighbors. These two operations turned out to be useful enough to warrant getting their own names. Windowed maximum, which smears bright parts of the image and makes them grow outward, is called dilation. Windowed minimum, which makes images look pitted and creepy, is called erosion.

[Images: a dog photo captioned "Dog with low self-regard", alongside its dilation and its erosion]

Using only successive applications of dilation and erosion it is possible to express a wide array of interesting image transformations. The composition of these two operators was considered important enough to warrant becoming a field unto itself called mathematical morphology. Morphological image transformations are still widely used for preprocessing, feature extraction, and if you squint you'll even see their likeness in the max-pooling operation used for subsampling by convolutional neural networks.

Python Implementation

Now that I've hopefully convinced you of their usefulness, let's see how to implement morphological image filters in Python. As a first cut, let's just loop over all the positions (i,j) in an image and at each pixel we'll loop over the surrounding k×k square of pixels to pick out the maximum. Technically, we could use a more complicated neighborhood (i.e. a surrounding circle of pixels or really any shape at all), but the humble square will do for this post.
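A straightforward first cut looks something like this (a sketch; dilate_naive is the function referred to below, though the helper variable names are illustrative):

    import numpy as np

    def dilate_naive(image, k):
        """Replace each pixel with the max over its k x k neighborhood."""
        m, n = image.shape
        half = k / 2                      # Python 2 integer division
        output = np.empty_like(image)
        for i in xrange(m):
            for j in xrange(n):
                start_i, stop_i = i - half, i + half + 1
                start_j, stop_j = j - half, j + half + 1
                currmax = image[i, j]
                for ii in xrange(max(start_i, 0), min(stop_i, m)):
                    for jj in xrange(max(start_j, 0), min(stop_j, n)):
                        if image[ii, jj] > currmax:
                            currmax = image[ii, jj]
                output[i, j] = currmax
        return output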

Simple enough, but how quickly does it run? Anyone who has tried writing numerically intensive code in pure Python knows the answer will probably fall somewhere between "not well" and "would have been faster on pen and paper". Running this algorithm on a 1024×768 image with a 7×7 neighborhood on my MacBook Air takes almost half a minute. Pathetic!

Python programmers have become trained to reach for a compiled library whenever they encounter a significant performance bottleneck. In this case, SciPy comes with a full suite of morphological operators that have been implemented efficiently in C. Performing the same dilation using SciPy takes only 34 milliseconds. That's an approximately 750X performance gap between Python and native code.

Runtime Compilation à la Parakeet

So, Python's bytecode interpreter is hopelessly slow, but is there any alternative to relying on lower-level languages for performance? In my ideal world, all of the code I write would sit comfortably inside a .py file. Toward this end, I've been working on Parakeet, a runtime compiler which uses Python functions as a template for dynamically generated native code. By specializing functions for distinct argument types and then performing high-level optimizations on the typed code, Parakeet is able to run code orders of magnitude faster than Python. It's important to note that Parakeet is not capable of speeding up or even executing arbitrary programs — it's really a runtime compiler for an array-oriented subset of Python. Luckily, all of the operations used to implement dilation (array indexing, looping, etc.) fit within this subset.

To get Parakeet to accelerate a Python function (a toy example follows the list):

  • Wrap that function in Parakeet's @jit decorator.
  • Once wrapped, any calls to the function get routed through Parakeet's type specializer, creating different versions of the function for distinct input types.
  • Each typed version of a function is then extensively optimized and turned into native code using LLVM.
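In code, the whole workflow is just this (add_one here is a made-up stand-in function):

    import numpy as np
    import parakeet

    @parakeet.jit
    def add_one(x):
        return x + 1.0

    a = np.arange(10.0)
    add_one(a)   # first call: specialize for float64 arrays, optimize, compile via LLVM
    add_one(a)   # later calls with the same input types reuse the compiled code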

If I stick an @jit decorator above dilate_naive, I find that it now runs in only 51 milliseconds. That is, almost 500X faster than Python, though still a lot slower than the SciPy implementation. Not bad for code that at least looks like it's written in Python.

Can We Get Faster Than SciPy?

If you poke around the source of SciPy morphology operators, you'll discover that SciPy doesn't actually use the naive algorithm above. SciPy's sagacious authors took note of the fact that windowed minima/maxima are separable filters, meaning they can be computed more efficiently by first performing 1D transformations on all the rows of the image, followed by 1D transformations on all the columns. Below is a Python implementation of this two-pass dilation which only inspects 2k neighboring pixels per output pixel, unlike the less efficient code above which has to look at k² neighbors.
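Here's what the two-pass version looks like (again a sketch; the function name and details are illustrative):

    import numpy as np

    def dilate_two_pass(image, k):
        """Separable dilation: 1D windowed max over rows, then over columns."""
        m, n = image.shape
        half = k / 2
        # First pass: windowed max along each row.
        rows = np.empty_like(image)
        for i in xrange(m):
            for j in xrange(n):
                start_j, stop_j = j - half, j + half + 1
                currmax = image[i, j]
                for jj in xrange(max(start_j, 0), min(stop_j, n)):
                    if image[i, jj] > currmax:
                        currmax = image[i, jj]
                rows[i, j] = currmax
        # Second pass: windowed max along each column of the row-dilated image.
        output = np.empty_like(image)
        for i in xrange(m):
            for j in xrange(n):
                start_i, stop_i = i - half, i + half + 1
                currmax = rows[i, j]
                for ii in xrange(max(start_i, 0), min(stop_i, m)):
                    if rows[ii, j] > currmax:
                        currmax = rows[ii, j]
                output[i, j] = currmax
        return output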

When I wrap this version with Parakeet's @jit decorator, it only takes 17 milliseconds. That's 2X faster than the precompiled SciPy version!

Comparison with Numba and PyPy

Out of curiosity, I wanted to see how Parakeet's performance compares with the better known projects that are attempting to accelerate Python: Numba and PyPy.

Numba is a runtime compiler which also aims to speed up numerical code by type specializing functions for distinct argument types. Like Parakeet, Numba uses LLVM for code generation on the back end and aims to speed up algorithms which consume and manipulate NumPy arrays. Numba is more ambitious in that it seeks to support all of Python, whereas Parakeet carves out a subset of the language which is amenable to efficient compilation.

Switching from Parakeet to Numba at first seemed simple, merely a matter of replacing @parakeet.jit with @numba.autojit. However, though the dilation benchmark managed to execute successfully, it was slower than CPython! To add insult to injury, Numba's compilation times were an order of magnitude slower than Parakeet's. Since I know that Numba's authors are no chumps, I emailed Mark Florisson to figure out how I was misusing their compiler.

It turns out that when Numba is confronted with any construct it doesn't support directly, it switches to a (very) slow code path. From my perspective, it seemed like it had taken a half-day off and gone for a long lunch. Parakeet is very different in this regard: if Parakeet can't execute something efficiently it complains loudly during compilation and gives up.

In this case, it was the min and max builtins which were giving Numba trouble. Once I changed the filter implementation to use an inline if-expression instead, writing xrange(stop_i if stop_i <= m else m) in place of xrange(min(stop_i, m)), Numba ran with compile and execution times uncomfortably close to Parakeet's. I'm feeling the heat.

PyPy differs dramatically from both Parakeet and Numba in that rather than generating native code from within the Python interpreter it is a completely distinct implementation of the Python language. PyPy uses trace compilation to generate native code. Getting numerical code to run in PyPy can be tricky since it currently only supports a rudimentary subset of NumPy via a reimplementation called NumPyPy. The project seems to be making steady progress, but due to the daunting scope of NumPy, there's still a lot of basic functionality missing.

The source for the timings below is in the Parakeet repository. Parakeet and Numba both perform type specialization and compilation upon the first function invocation, so that time is shown below separately from the execution time of a second function call. CPython's bytecode compilation is so fast as to be negligible, so I didn't even attempt to time it. Finding out how long PyPy takes to generate code from a hot path would be interesting, but I have no idea how to access that kind of information, so I also left PyPy's compile times as "n/a".

Implementation   Algorithm        Compile Time   Execution Time
SciPy            Separable O(k)   n/a            34 ms
CPython          Naive O(k²)      n/a            25,480 ms
PyPy             Naive O(k²)      n/a            657 ms
Numba            Naive O(k²)      286 ms         61 ms
Parakeet         Naive O(k²)      180 ms         51 ms
CPython          Separable O(k)   n/a            7,724 ms
PyPy             Separable O(k)   n/a            429 ms
Numba            Separable O(k)   407 ms         19 ms
Parakeet         Separable O(k)   238 ms         17 ms

Parakeet wins on performance over Numba by the thinnest margin. When Mark (the Numba developer) ran the same benchmark on a different computer, he actually saw Numba coming in slightly ahead. Maybe the difference doesn't even rise above the level of statistical noise? Either way, both Numba and Parakeet seem like reasonable choices for generating type-inferred native code from Python functions.

A crucial distinction between the two projects is that Parakeet has been designed as a domain specific language. Parakeet is an array-oriented language embedded within Python, but it is not, and never will be, Python itself. If you wanted to, for example, create user-defined objects inside of compiled code, with Parakeet you're out of luck. The advantage of this approach is that Parakeet can guarantee that whatever it compiles will have reasonably good performance. Numba, on the other hand, can technically execute arbitrary Python but still has some implicit language boundaries demarcating what will or won't run efficiently.

Whither Parallelism?

A nice property of image filtering algorithms that I have completely ignored in this post is that they are usually perfectly parallelizable. Parakeet started out as a parallelizing compiler and only recently got stripped down to generating single core code. Parallelism is coming back in a cleaner form, so the next post will hopefully be about doing image filtering in Parakeet an order of magnitude (or two) faster.

For now, you can try out Parakeet and it might speed up your code. Or it might fail mysteriously — it's still a work in progress and you've been warned!

Sunday, January 27, 2013

A transformative journey from Python to native code

When somebody offers to compile your Python code, exactly what kind of mischief are you getting yourself into? What diabolical schemes does a just-in-time compiler enact to transmute sluggish Python code into something speedier?

Toward the end of my last post, I mentioned that I'm working on a library called Parakeet which accelerates numerical Python. In this post, I'm going to illuminate the mysterious inner workings of a just-in-time compiler by following a function through its various stages of existence within Parakeet.

Caveat: Parakeet isn't finished, and it's awkward to write so extensively about something I don't yet want anyone using. Nonetheless, Parakeet is the compiler I know best and its relatively simple design will hopefully allow me to convey a general sketch of how JITs work.

The function we're going to radically rearrange today is count_thresh, which sums up the number of elements in an array less than a given threshold.
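In its simplest form it's just a loop and a counter (a sketch; the details are illustrative):

    def count_thresh(values, thresh):
        n = 0
        for elt in values:
            n += elt < thresh   # the boolean comparison gets added to the count
        return n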

This little ditty of looping and summation is simple enough that I hope its relationship to the code we later generate will stay evident. It's not, however, an entirely contrived computation. If you were to throw in an array of class labels and a dash more complexity, you would soon have the core of a decision tree learning algorithm.

Compared with the wild menagerie of run-time compilation techniques that have developed over the past decade, Parakeet is a relatively modest function-specializing compiler. If you want Parakeet to compile a particular function, then wrap that function with the @jit decorator. Like this:
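(Reusing the count_thresh from above; parakeet.jit is the fully qualified spelling of the decorator.)

    import parakeet

    @parakeet.jit
    def count_thresh(values, thresh):
        n = 0
        for elt in values:
            n += elt < thresh
        return n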

The job of @jit is to intercept calls into the wrapped function and then to initiate the following chain of events:

  • Translate the function into an untyped representation, from which we'll later derive multiple type specializations.
  • Specialize the untyped function for any argument types which get passed in.
  • Optimize the merciless heck out of the typed code, and translate abstractions such as tuples and n-dimensional arrays into simple heap-allocated structures with low-level access code.
  • Translate the optimized and lowered code into LLVM, and let someone else worry about lower-level optimizations and how to generate architecture-specific native code.
An eager reader may be thinking:
I can just stick that decorator atop any Python function and it will magically run faster? Great! I'll paste @jit all over my code and my Python performance problems will be solved!

Easy with those decorators! Parakeet is not a general-purpose compiler for all of Python. Parakeet only supports a handful of Python's data types: numbers, tuples, slices, and NumPy arrays.

To manipulate these values, Parakeet lets you use any of the usual math and logic operators, along with some, but not all, of the built-in functions. Functions such as range are compiled to deviate from their usual behavior — in Python their result would be a list but in Parakeet such functions create NumPy arrays.

If your performance bottleneck doesn't fit neatly into Parakeet's restrictive universe then you might benefit from a faster Python implementation, or alternatively you could outsource some of your functionality to native code via Cython.

Let's continue, following count_thresh on its inexorable march toward efficiency.


From Python into Parakeet

When trying to extract an executable representation of a Python function, we face a choice between using its syntax tree:

...or the lower-level bytecode which the Python interpreter actually executes:
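You can dump both yourself from the standard library if you'd like to follow along (a quick sketch, assuming count_thresh is still the plain, undecorated function):

    import ast, dis, inspect

    # The syntax tree, as the ast module parses it from the source text:
    print ast.dump(ast.parse(inspect.getsource(count_thresh)))

    # ...and the stack-machine bytecode the interpreter actually executes:
    dis.dis(count_thresh)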

Neither is ideal for program analysis and transformation, but since the bytecode is littered with distracting stack manipulation and discards some of the higher-level language constructs, let's start with a syntax tree and quickly slip into something a little more domain specific.


Untyped Representation

When a function is handed over from Python into Parakeet, it is translated into a form that is mostly similar to an ordinary Python syntax tree. There are, however, a few key differences:

  • For-loops must traverse numeric ranges, so for x in xs gets translated into for i in range(len(xs)): x = xs[i]
  • There is a suspicious looking phi expression at the top of the loop. What is that thing? Does it have anything to do with the name of this site?

Looking even closer at the code above, you'll notice the variable n has been split apart into three distinct names: n, n2 and n_loop. What can account for such triplicative witchery?

Calm your agitation, dear reader. You're looking at a variant of Static Single Assignment form. I'll write about SSA in more detail later, but for now the most important things to know about it are:

  • Every distinct assignment to a variable in the original program becomes the creation of a distinct variable. Reminiscent of functional programming, no?
  • At a point in the program where control flow could have come from multiple places (such as the top of a loop), we explicitly denote the possible sources of a variable's value using a φ-node.
  • In exchange for all these variable name gymnastics we get a tremendous simplification in the onerous task of writing program analyses. It may not be immediately obvious why, but this post is already too long, so trust me for now.

Another difference from Python is that Parakeet's representation treats many array operations as first-class constructs. For example, in ordinary Python len is a library function, whereas in Parakeet it's actually part of the language syntax and thus can be analyzed with higher-level knowledge of its behavior. This is particularly useful for inferring the shapes of intermediate array values.


Type-specialized Representation

When you call an untyped Parakeet function, it gets cloned for each distinct set of input types. The types of the other (non-input) variables are then inferred and the body of the function is rewritten to insert casts wherever necessary.

In the case of count_thresh, observe that the function has been specialized for two inputs of type array1(float64) and float64 and that its return type is known to be int64. Furthermore, the boolean intermediate value produced by checking if an element is less than the threshold is cast to int64 before getting added to n2.

If you use a variable in a way that defeats type inference (for example, by treating it sometimes as an array and other times as a scalar), then Parakeet gives up on your code and raises an error.
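For example, a contrived function like this one (the name is made up) would be rejected rather than compiled:

    import parakeet

    @parakeet.jit
    def confused(x):
        # 'y' is an array on one branch and a scalar on the other, so there is
        # no single type to infer for it; Parakeet raises an error instead of compiling.
        if x[0] > 0:
            y = x
        else:
            y = 0.0
        return y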


Optimize mercilessly!

Type specialization already gives us a big performance boost by enabling the use of an unboxed representation for numbers. Adding two floats stored in registers is orders of magnitude faster than calling  __add__ on PyFloatObjects.

However, if all Parakeet did was specialize your code it would still be significantly slower than programming in a lower-level language. The compiler needs to exert more effort to contort and transform array-oriented Python code into the lean mean loops you would expect to get from a C compiler. Parakeet attacks sluggish code with the usual battery of standard optimizations, such as constant propagation, common sub-expression elimination, and loop invariant code motion. Furthermore, to mitigate the abstraction cost of array expressions such as 0.5*vec1 + 0.5*vec2, Parakeet fuses array operators, which then exposes further opportunities for optimization.
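To get a feel for what fusion buys you, compare an unfused version of that expression, which materializes a temporary array per operator, with the single fused loop it effectively becomes (a rough sketch of the effect, with made-up names, not Parakeet's literal output):

    import numpy as np

    def blend_unfused(vec1, vec2):
        tmp1 = 0.5 * vec1        # one temporary array
        tmp2 = 0.5 * vec2        # another temporary array
        return tmp1 + tmp2       # a third array for the result

    def blend_fused(vec1, vec2):
        out = np.empty_like(vec1)
        for i in xrange(len(vec1)):
            out[i] = 0.5 * vec1[i] + 0.5 * vec2[i]   # one pass, no temporaries
        return out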

In this case, however, the computation is simple enough that only a few optimizations can meaningfully change it. I turned off loop unrolling for this post since it significantly expands the size of the produced code.

In addition to rewriting code for performance gain, Parakeet also "lowers" higher-level constructs such as tuples and arrays into more primitive concepts. Notice that the above code does not directly index into n-dimensional arrays, but rather explicitly computes offsets and indexes directly into an array's data pointer. Lowering complex language constructs simplifies the next stage of program transformation: the escape from Parakeet into LLVM.


LLVM

LLVM is a delightfully well-engineered compiler toolkit which comes with a powerful arsenal of optimizations and generates native code for a variety of platforms. To get LLVM to finish the job of compiling count_thresh, we need to translate the Parakeet representation into LLVM's assembly-like intermediate language. Once the Parakeet representation has been typed, optimized, and stripped clean of abstractions, that translation turns out to be surprisingly easy. Sure, there's some plumbing work to map between Parakeet's types and LLVM's type system, but that's probably the most straightforward part of this whole pipeline.


Generated Assembly

Once we pass the torch to LLVM, Parakeet's job is mostly done. LLVM chisels the code we've given it with its bevy of optimization passes. Once every last inefficiency has been ferreted out and exterminated, LLVM uses a platform-specific back-end to translate from its assembly language into native instructions. And thus, at last, we arrive at native code:

Reading x86-64 assembly is tedious, so I won't expect you to make sense of this code dump. But do notice that we end up with the same number of machine instructions as we originally had Python bytecodes. It's safe to suspect that the performance might have somewhat improved.

How much faster is it?

In addition to benchmarking against the Python interpreter (an unfair comparison with a predictable outcome), let's also see how Parakeet stacks up against an equivalent function implemented using NumPy primitives:
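The NumPy version is essentially a one-liner (the _np suffix is just for this comparison):

    import numpy as np

    def count_thresh_np(values, thresh):
        # Build a boolean mask and sum it; each True counts as 1.
        return np.sum(values < thresh)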

I gave the NumPy, Python, and Parakeet versions of count_thresh 1 million randomly generated inputs and averaged the time they took to complete over 5 runs.

Python     NumPy      Parakeet
3.7205     0.0036     0.0025

Execution time in seconds

Not bad — Parakeet is about 1500 times faster than vanilla Python and even manages to edge out NumPy by a safe margin. Still, that NumPy code is tantalizingly more compact than the explicit loop we've been working with throughout this post.

Can Parakeet compile something that looks more like the NumPy version of count_thresh? In fact yes, you can (and are encouraged to) write code in a high-level array-oriented style with Parakeet. However, an explanation of how such code gets compiled (and parallelized) will have to wait until I discuss Parakeet's data parallel operators in the next post.

Thursday, January 24, 2013

Just-in-time compilers for number crunching in Python

Python is an extremely popular language for number crunching and data analysis. For someone who isn't familiar with the scientific Python ecosystem this might be surprising, since Python is actually orders of magnitude slower at simple numerical operations than most lower-level languages. If you need to do some repetitive arithmetic on a large collection of numbers, then ideally those numbers would be stored contiguously in memory, loaded into registers in small groups, and acted upon by a small set of machine instructions. The Python "interpreter" (actually a stack-based virtual machine), however, uses a very bulky object representation. Furthermore, Python's dynamism introduces a lot of indirection around simple operations like getting the value of a field or multiplying two numbers. Every time you run some innocuous looking code such as x[i] = math.sqrt(y[i] * z.imag), a shocking host of dictionary look-ups, allocations, and all-around wasteful computations kick into gear.

The trick, then, to getting good numerical performance from Python is to avoid really doing your work in Python. Instead, you should use Python's remarkable capacity as a glue language to coordinate calls between highly optimized lower-level numerical libraries. This is why a certain handsome astrophysicist called Python "the engine of modern science". NumPy plays an extremely important role in enabling this sticky style of programming by providing a high-level Pythonic interface to an unboxed array that can be passed easily into precompiled C and Fortran libraries.

In order to benefit from NumPy and its vast ecosystem, your algorithm must spend most of its time performing some common operation for which someone has already written an efficient library. Want to multiply some matrices? Great news, calling BLAS from Python isn't really much slower than doing it from C. Need to perform a large convolution? No problem, just hop on over to the frequency domain with a call to the always zippy FFTW.

But disaster and tribulation: What if no one has yet written a library that does the heavy lifting that I need? The standard solutions all boil down to "implement the bottleneck in C" (or if you're feeling enlightened, Cython).

Is a different way possible? Must we sacrifice all our abstractions to get performance? Even if we give up all the niceties of Python, we'll still probably churn out some fairly naive native code that woefully underutilizes our computers' capabilities. Think of all those pitifully empty vector registers, despondently idle extra cores, and a swarm of GPU shader units which haven't seen a general purpose computation in weeks. Harnessing all that parallelism from a low-level language requires, for most tasks, a heroic effort.

A worthy challenge is then issued in two parts:

  • Find a way to accelerate a meaningfully expressive subset of Python, such that it's possible to still use convenient abstractions without a large runtime cost. This generally implies a just-in-time compiler of some sort (though a few notable exceptions do compile Python statically). 
  • As long as we're dynamically translating high level abstractions into low-level executables, is there any chance that the "high level"-ness could be useful for parallelization?  It sure would be nice to use those other cores...

To be clear, I am not talking about speeding up all of Python, though some very smart and praiseworthy folks have been working on that for a while. Rather, the great thing about many numerically intensive algorithms is that they are remarkably simple. You might get away with using some subset of Python for implementing the core of your computation, and still feel like you are coding at a high level of abstraction (so long as the boundary between the numerical language subset and the rest of Python is mostly seamless).

A surprisingly large number of projects have already risen to meet this challenge. They (roughly) fall onto a spectrum which trades off between the freedom of the compiler to dramatically rewrite/optimize your code and the expressiveness of the sub-language that is exposed to the programmer.

  • NumPyPy - an attempt to reimplement all of NumPy in Python and then let PyPy do its meta-tracing magic. It seems to me that the all-or-nothing nature of PyPy's uncooperativeness with existing NumPy libraries makes this a Utopian misadventure in code duplication, requiring the reimplementation of a huge scientific computing code base with faint hope that a largely opaque general-purpose JIT can play the role of an optimizing compiler for scientific programs. Hopefully fijal will prove us detractors wrong.
  • Numba - one of the several cool projects Travis Oliphant has been cooking up since he started Continuum Analytics. For the most part, Numba's main purpose is to unbox numeric values and make looping fast in Python. It's still a work in progress and seems to be going in multiple directions at once. They're adding support for general-purpose Python constructs, but relying on the traditional Python runtime to implement anything non-numeric, which sequentializes their runtime due to the Global Interpreter Lock. To enable parallelism you can disavow using any constructs that rely on things Numba doesn't compile directly...but that requires that you know what those constructs are. Like I said, it's still evolving. The commercial version of Numba even touts some capacity for targeting GPUs, but I haven't used it and don't know what can actually get parallelized.
  • Blaze - another Travis Oliphant creation, though this one is even more ambitious than Numba. Whereas NumPy is a good abstraction for dense in-memory arrays with varying layouts, Blaze is intended to work with more complex data types and "is designed to handle out-of-core computations on large datasets that exceed the system memory capacity, as well as on distributed and streaming data". Travis is billing Blaze as the successor to NumPy. The underlying abstractions are to a large degree inspired by the Haskell library Repa 3, which is very cool and worth reading about. One key difference between Blaze and NumPy (aside from the much richer array type) is that Blaze delays array computations and then compiles them on-demand. I get the sense that Blaze is pretty far off from being ready for the masses, but I'm sure it will be Awesome Upon Arrival.  
  • Copperhead - Copperhead takes the direct route to parallelism by forcing you to write your code using data parallel operators which have clear compilation schemes onto multicore and GPU targets. To further simplify the compiler's job, Copperhead forces your code to be purely functional, which goes far against the grain of idiomatic Python. In exchange for these semantic handcuffs, you get some pretty speedy parallel programs. Unfortunately, the author Bryan Catanzaro has disappeared from github, so I'm not sure if Copperhead is still being developed.
  • Theano - Theano is both more cumbersome and more honest than projects like Numba or Copperhead, which take code that looks like Python but then execute it under different assumptions/semantics. With Theano, on the other hand, you have to explicitly piece together symbolic expressions representing your computation. You're always aware that you're constructing Theano syntax explicitly. In exchange for your effort though, Theano can work small feats of magic. For example, Theano can group and reorganize matrix multiplications, reorder floating point operations for stability, and compute gradients using automatic differentiation. Their backend has some preliminary support for CUDA and should eventually add in multi-core and SIMD code generation. 

Foot-in-mouth edit: I put PyPy all the way on the hopelessly sequential left of the diagram, just as they announced a new position to parallelize and vectorize their JIT. Also, fijal justifiably took offense at my description of NumPyPy. I was wrong in saying that NumPyPy is a whole-ecosystem rewrite; they're only going to rewrite the core and are still figuring out the right way to interact with native libraries.


To add another compiler-critter into the fray, I've written Parakeet, a just-in-time compiler for numerical Python which specializes functions for given input types. Parakeet makes extensive use of the data parallel operators such as map, reduce, and (prefix) scan. It's not essential to use these operators when programming with Parakeet, but they do enable parallelism and more aggressive optimizations. Luckily, it's quite easy to end up using these operators by accident, since our library functions are implemented on top of them.
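For a taste of what that looks like, whole-array code ends up on those operators without the programmer asking for them: the elementwise comparison below becomes a map and the sum becomes a reduce (a sketch, using the array-style count_thresh from the post above):

    import numpy as np
    import parakeet

    @parakeet.jit
    def count_thresh(values, thresh):
        # values < thresh is an elementwise map; np.sum is a reduction.
        return np.sum(values < thresh)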

(edited to present Parakeet less sheepishly)
On the spectrum described above, Parakeet sits somewhere between Numba and Copperhead. Like Copperhead, Parakeet's subset of Python is limited to using a small set of data types, library functions, and data parallel operators. On the other hand, unlike Copperhead, you don't have to program in a purely functional style: if you write loop-heavy numerical code you'll miss out on parallelization but will still see good single-core performance. The main difference from Numba is the absence of any sort of "object layer" which uses the Python C API. Parakeet will (in the long run) support a smaller, more numerically focused subset of Python for the purpose of giving the programmer a clear sense of what will run fast (and if a feature is slow, then Parakeet simply doesn't support it). Additionally, Parakeet's implementation of Python and NumPy library functions leans heavily on data parallel operators, which gives me hope for making pervasive use of GPUs and multi-core hardware.

If you want to learn more about Parakeet check out some of the following presentations:

  • HotPar 2012: We submitted a paper describing an old version of our compiler (written in OCaml with a fragile GPU backend).
  • SciPy 2013 Lightning Talk: A 5-minute overview of the rewritten Parakeet with an LLVM backend.
  • PyData Boston 2013: A longer presentation with more extensive comparison to Numba.
If you want to try using Parakeet, you can either install it via pip (pip install parakeet) or just clone the github repo. Let me know how it goes!