There have been a few articles lately posing the age-old question: “Is R or Python a better language to learn for a budding young data scientist?”
The consensus answer appears to be “It depends”, but in reality there’s no need to choose between R and Python, because you can have both.
It’s called RPy2: http://rpy.sourceforge.net/rpy2/doc-2.1/html/introduction.html
 

What is “Data Science”?

Before talking about how RPy2 enables “data science”, I’m going to point out that “data science” is a bit of an odd term. All science is “data science”. “Non-data science” is a completely different field: philosophy. “Data science” is just science, which is the discipline of publicly testing ideas by systematic observation, controlled experiment, and Bayesian inference.
The goal of “data science” is to draw statistically valid inferences from the data. The tag “data” is meant to suggest that it doesn’t really matter what data are being used, but this is false: it is difficult-to-impossible to do science without getting up-close-and-personal with the data, understanding the foibles of the systems that produced them, and dealing intelligently and sensitively with the non-idealities that come along with the good stuff.
Any interesting dataset has at least some of the following: missing values, outliers, and noise. Missing values are exactly what the name implies. Outliers are weird events that for some reason or other are wildly far outside the envelope of reasonableness. Noise is the distribution that results from the sea of random (or non-random) influences on the measured values. Outliers and noise differ in that noise generally has a well-measured distribution from fairly well-understood causes, while outliers are typically the result of poorly understood processes that happen rarely enough that we can’t get a good measure of the distribution.
For dealing with these kinds of things, R, Python, and RPy2 are all useful tools.
 

Why R is Useful for Data Scientists

R is a delightful little language in the hands of an experienced statistical analyst. It was written by and for statisticians, and makes some of the most basic data management tasks very easy. In particular, the three basic tasks:

  • Labeling data
  • Filling in missing values
  • Filtering

are all very well supported by R. Labeling is probably the most important of these. R’s concept of a “data frame”, which carries along dimension and entity labels as column and row headers while letting algorithms work on the purely numerical data inside, is a surprisingly big deal. Traditional numerical programming languages like Python typically relegate that kind of book-keeping, which data frames do automatically, to the programmer; done by hand, it takes a lot of work and is very easy to get wrong.

Dealing with missing values and filtering outliers (or discarding entities that have too many outliers or missing values) are also two very important basic functions in any data processing task. There are also those cases where something that should be strictly positive (mass values, say) turns out to be negative now and then due to measurement error. How you deal with these things can have a big effect on the outcome of your analysis.
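To make those three tasks concrete, here is a minimal sketch that drives R from Python through RPy2 (assuming rpy2 and R are installed; the column names, values, and the median-fill rule are invented purely for illustration):

    import rpy2.robjects as robjects

    # Build a small labelled data frame in R, then clean it up: fill the
    # missing mass with the column median and drop the physically
    # impossible negative value.
    robjects.r('''
        df <- data.frame(sample = c("s1", "s2", "s3", "s4"),
                         mass   = c(1.2, NA, 0.9, -0.1))
        df$mass[is.na(df$mass)] <- median(df$mass, na.rm = TRUE)  # fill missing value
        df <- df[df$mass > 0, ]                                   # filter the outlier
    ''')

    print(robjects.r('df'))  # row and column labels travel with the data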
R has a wealth of algorithms for dealing with these sorts of situations that embody the distilled wisdom of centuries of scientific practice, although it still requires a measure of taste and good judgement on the part of the analyst to choose the ones best suited to the data they are dealing with.
 

RPy2: Bridging the R-Python Gap

Pandas, the Python data library, has many of the same features these days, but RPy2 creates a nice migration path from R to Python, and lets you learn a lot about R as an incidental adjunct to learning Python. Moving in the other direction, an experienced analyst can do a lot of experimental development in R and then, once they are happy with the results and want to incorporate the algorithm into a Python application for distribution to users, use RPy2 to make the move.
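As a rough illustration of that overlap, the same sort of cleanup as in the R sketch above can be written directly in pandas (again with made-up values):

    import pandas as pd

    # Row labels (the index) travel with the data, much like an R data frame.
    df = pd.DataFrame({"mass": [1.2, None, 0.9, -0.1]},
                      index=["s1", "s2", "s3", "s4"])

    df["mass"] = df["mass"].fillna(df["mass"].median())  # fill the missing value
    df = df[df["mass"] > 0]                              # filter the negative outlier
    print(df)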
The ability to perform this migration while never leaving the conceptual model of R is very valuable, but on the other side of the fence, the ability to use a truly general-purpose programming language like Python to wrap that conceptual model in a user-friendly application with a variety of complex additional features (printing, networking, USB support, etc.) is vital.

For example, I’ve used this approach to create Python applications that read some sensor data, process it via RPy2, and then display it to the user in a variety of ways. I have no clue how I’d read sensor data from R, although there’s probably a way to do it. With Python, there was already a module for doing what I needed, and if there hadn’t been, it would have been easy to write one as an extension.
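A skeletal version of that pattern might look something like the following, with a hard-coded list standing in for the real sensor-reading module and R’s mean() and sd() standing in for whatever analysis the application actually needs:

    import rpy2.robjects as robjects
    from rpy2.robjects import FloatVector

    def summarize_readings(readings):
        # Hand the Python list to R and let R compute the summary statistics.
        robjects.globalenv["readings"] = FloatVector(readings)
        return (robjects.r("mean(readings)")[0],
                robjects.r("sd(readings)")[0])

    # In a real application the readings would come from a sensor module;
    # here they are hard-coded for illustration.
    print(summarize_readings([0.98, 1.02, 1.01, 0.97, 1.03]))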
So if you don’t already know R, my recommendation is to learn Python and use RPy2 to access R’s functionality. That way you’ll be learning one language but gaining the power of two. Once you’ve learned RPy2, the jump to pure R isn’t a big one, whereas starting from the other end the migration path isn’t quite so easy.