Category: News

We are very excited to announce an early release of PyAutoDiff, a library that allows automatic differentiation in NumPy, among other useful features. A quickstart guide is available here.

Autodiff can compute gradients (or derivatives) with a simple decorator:
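As a rough illustration of the interface only (not autodiff's actual implementation, which compiles the gradient symbolically through Theano), a hypothetical stand-in decorator can approximate the same result with central finite differences:

```python
import numpy as np

def gradient(fn, eps=1e-6):
    """Illustrative stand-in for an autodiff-style @gradient decorator.

    Returns a function that approximates d(fn)/dx by central finite
    differences. autodiff itself derives the gradient symbolically via
    Theano; this sketch only mimics the decorator's interface.
    """
    def grad_fn(x):
        x = np.asarray(x, dtype=float)
        g = np.empty_like(x)
        it = np.nditer(x, flags=['multi_index'])
        for _ in it:
            i = it.multi_index
            xp, xm = x.copy(), x.copy()
            xp[i] += eps
            xm[i] -= eps
            g[i] = (fn(xp) - fn(xm)) / (2 * eps)
        return g
    return grad_fn

@gradient
def f(x):
    return np.sum(x ** 2)  # analytic gradient is 2x

g = f(np.array([1.0, 2.0, 3.0]))  # approximately [2. 4. 6.]
```

With the real library, decorating `f` the same way would return the exact symbolic gradient rather than a numerical approximation.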

More broadly, autodiff leverages Theano's powerful symbolic engine to compile NumPy functions, allowing features like mathematical optimization, GPU acceleration, and of course automatic differentiation. Autodiff is compatible with any NumPy operation that has a Theano equivalent and fully supports multidimensional arrays. It also gracefully handles many Python constructs (though users should be very careful with control flow tools like if/else and loops!).

In addition to the @gradient decorator, users can apply @function to compile functions without altering their return values. Compiled functions can automatically take advantage of Theano's optimizations and available GPUs, though users should note that GPU computations are only supported for float32 dtypes. Other decorators, classes, and high-level functions are available; see the docs for more information.


It is also possible for autodiff to trace NumPy objects through multiple functions. It can then compile symbolic representations of all of the traced operations (or their gradients) -- even with respect to objects that were purely local to the functions' scope.

One of the original motivations for autodiff was working with SVMs that were defined purely in NumPy. The following example (also available at autodiff/examples/svm.py) fits an SVM to random data, using autodiff to compute parameter gradients for SciPy's L-BFGS-B solver:
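A hedged sketch of that workflow follows. The hinge-loss gradient is derived by hand here as a stand-in for the one autodiff would compute automatically, and the data, the `C` value, and the squared-hinge formulation are all illustrative assumptions rather than the contents of the actual `svm.py` example:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.RandomState(0)

# Random data with labels in {-1, +1} from a random linear rule.
X = rng.randn(100, 5)
y = np.sign(X @ rng.randn(5))

C = 1.0  # regularization trade-off (assumed value)

def loss_and_grad(w):
    # Squared-hinge SVM objective: smooth, so L-BFGS-B applies cleanly.
    # In the autodiff example, the gradient below would be generated
    # automatically instead of written out by hand.
    margins = np.maximum(0.0, 1.0 - y * (X @ w))
    loss = 0.5 * w @ w + C * np.sum(margins ** 2)
    grad = w - 2.0 * C * (X.T @ (margins * y))
    return loss, grad

res = minimize(loss_and_grad, np.zeros(5), jac=True, method='L-BFGS-B')
accuracy = np.mean(np.sign(X @ res.x) == y)
```

On this separable toy problem the fitted weights should classify nearly all of the training points correctly.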

Some members of the scientific community will recall that James Bergstra began the PyAutoDiff project a year ago in an attempt to unify NumPy's imperative style with Theano's functional syntax. James successfully demonstrated the project's utility, and this version builds on and extends that foundation. Standing on the shoulders of giants, indeed!

Please note that autodiff remains under active development and features may change. The library has been performing well in internal testing, but we're sure that users will find new and interesting ways to break it. Please file any bugs you may find!


I'm very pleased to announce LDC's new website, and in particular the Axon blog.

As we transition the company from research to product, this is where we'll share interesting things we've discovered and lessons we've learned, and of course any other data news that catches our eye.

With great thanks to our many supporters,
Jeremiah

According to the NYT, Disney is implementing a new data-driven system in its parks:

...Disney has decided that MyMagic+ is essential. The company must aggressively weave new technology into its parks — without damaging the sense of nostalgia on which the experience depends — or risk becoming irrelevant to future generations, Mr. Staggs said. From a business perspective, he added, MyMagic+ could be “transformational.”

At first, the system will be used to track customer information (name, birthday, preferred rides, etc.). Future versions should allow the company to know where in the park each customer is and whether or not a nearby attraction would interest them. Beyond that, the company could predict a customer's likely trajectory around the park -- or even subtly adjust that path to minimize time on line and maximize time spent on the attractions.

In one of the best technology articles of 2012, Steve Lohr writes about the need for thought and intuition when dealing with data. He observes:

In so many Big Data applications, a math model attaches a crisp number to human behavior, interests and preferences. The peril of that approach, as in finance, was the subject of a recent book by Emanuel Derman, a former quant at Goldman Sachs and now a professor at Columbia University. Its title is “Models. Behaving. Badly.”

Claudia Perlich, chief scientist at Media6Degrees, an online ad-targeting start-up in New York, puts the problem this way: “You can fool yourself with data like you can’t with anything else. I fear a Big Data bubble.”

And concludes:

Listening to the data is important, [data scientists] say, but so is experience and intuition. After all, what is intuition at its best but large amounts of data of all kinds filtered through a human brain rather than a math model?

At the M.I.T. conference, Ms. Schutt was asked what makes a good data scientist. Obviously, she replied, the requirements include computer science and math skills, but you also want someone who has a deep, wide-ranging curiosity, is innovative and is guided by experience as well as data.

The NYT writes about recent advances in deep or hierarchical machine intelligence systems:

Advances in pattern recognition hold implications not just for drug development but for an array of applications, including marketing and law enforcement. With greater accuracy, for example, marketers can comb large databases of consumer behavior to get more precise information on buying habits. And improvements in facial recognition are likely to make surveillance technology cheaper and more commonplace.

The NYT writes about Google's experimental web-scale autoencoder neural network. The system searched YouTube videos for recognizable images in a completely unsupervised manner -- proving conclusively that the internet is indeed full of cats:

The research is representative of a new generation of computer science that is exploiting the falling cost of computing and the availability of huge clusters of computers in giant data centers. It is leading to significant advances in areas as diverse as machine vision and perception, speech recognition and language translation.

Although some of the computer science ideas that the researchers are using are not new, the sheer scale of the software simulations is leading to learning systems that were not previously possible. And Google researchers are not alone in exploiting the techniques, which are referred to as “deep learning” models. Last year Microsoft scientists presented research showing that the techniques could be applied equally well to build computer systems to understand human speech.