What is Computer Science?

When I’m asked what I study, I often say “computers”. When I’m asked where I work, I say “at some computer place”. But those descriptions are far from accurate.

Inspired by a recent blog post from a friend and colleague about why he’s passionate about math (see What is Math?), I decided to write a post about what I do and what Computer Science (CS) is to me.

I actually wrote a similar post about CS last fall. While many of my views have since evolved, many haven’t, and it details some cool real-world analogies for binary search and recursion, two popular CS topics (see Common misconceptions about CS).

Full disclaimer, my experience in CS is by no means extensive, so I will try my best to not extrapolate unduly.

During the school year I’m an undergraduate student and teaching assistant in CS at Brandeis University, and currently I’m a software-engineering intern at a big data analytics company called HP Vertica.

I guess I should start with the bits that confuse people the most.

Out of the five computer science classes I’ve had so far, only one allowed me to use a laptop, tablet, or any sort of electronic device in class. At my job, I spend just as much time writing documentation or drawing out figures on a whiteboard as I do writing code.

A prominent computer scientist and winner of the 1972 Turing Award (the Nobel Prize of Computer Science), Edsger W. Dijkstra, is famous for saying “Computer Science is no more about computers than astronomy is about telescopes”.

But how can that be? The study of computers isn’t about computers?

Well, it turns out that CS is not the study of computers. It is instead the study of computation: the study of modeling, interacting with, and abstracting over information. The computer turns out to be a great tool for this.

I know as little about why your iPhone’s screen looks kind of yellow as you do. I do not know how to unlock a locked MacBook without a password. And I certainly do not know how to “hack” into a “mainframe”, or whatever.

That being said, I can give you some examples of the kind of work I have done.

For a personal project in high school, I wrote a program that analyzed any given text file (like Moby Dick, the Bible, etc.) and generated pseudo-random text that mimicked the style of the source text. The goal (and result) was that the generated text looked like it was excerpted from the source text.
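The post doesn’t say how the program worked, but a common technique for this kind of style-mimicking text generation is a Markov chain over words: record which words follow each short word sequence in the source, then walk those statistics randomly. A minimal sketch of that idea (this is an illustration of the technique, not the original project’s code):

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each run of `order` consecutive words to the words that follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=50):
    """Walk the chain from a random starting key, picking random followers."""
    key = random.choice(list(chain))
    out = list(key)
    for _ in range(length - len(key)):
        followers = chain.get(tuple(out[-len(key):]))
        if not followers:
            break  # dead end: this sequence only appeared at the end of the source
        out.append(random.choice(followers))
    return " ".join(out)
```

Because every generated word actually followed its predecessors somewhere in the source, short stretches of output read like plausible excerpts, which matches the goal described above.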

At my current job, I’ve optimized various SQL queries to run faster; in one extreme case, a 100x performance boost was measured.

The “real-world application” is that million-customer companies can ask “what’s the average distance that my customers live from the store that is nearest to them?” and see results in seconds or minutes, rather than in hours or days.
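The actual work was SQL optimization inside Vertica, which isn’t shown here, but the question itself is a simple computation. As a toy illustration (plain Python, Euclidean distance, made-up coordinates; real customer data would use geographic distance and a database):

```python
import math

def nearest_store_distance(customer, stores):
    """Distance from one customer to the closest store."""
    return min(math.dist(customer, s) for s in stores)

def avg_nearest_distance(customers, stores):
    """Average over all customers of the distance to their nearest store."""
    return sum(nearest_store_distance(c, stores) for c in customers) / len(customers)
```

The reason this is slow at scale is visible in the sketch: done naively, it compares every customer against every store, so the optimization work is about avoiding exactly that kind of all-pairs scan.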

My high school project took about 4 hours of planning out and about 2,000 lines of code. The project at my job took over 100 hours of planning out and only about 750 lines of code.

“Planning out” at my job included background research, inventing the optimizations, proving their validity and effectiveness, documenting them, designing my implementations for them, putting them through review of a panel of engineers, filing patent paperwork, etc.

The point is that my time is not spent fiddling with phones, tablets, or computers. Instead, it’s spent using the computer as a tool to do some pretty cool things with information.

But since I find it pretentious to say I study “modeling, interacting with, and abstracting over information”, if you ask me what I study, I’ll probably still say “computers”.

Common misconceptions about computer science

Computers are a pretty new thing. There is a lot of discussion around when the first computer was invented, because it all depends on how we define “computer”.

“Computer” was first recorded as being used in 1613 to describe any individual who did math or calculations. “Computer” later came to describe the Difference Engine, the first mechanical computer, which Charles Babbage began designing in 1822.

Columbia University declares the IBM (International Business Machines) 610 as the first personal computer because it was the first programmable computer intended for use by one person. The IBM 610 was announced in 1957, and because it cost $55,000 ($457,118 after inflation), only 180 units were produced.

Regardless of which exact computer was the first, computers did not begin to change the life of average people until the late 1980s through the 1990s.

According to the 2011 U.S. Census, home computer use began in the early 1980s and has been growing steadily since, with only 8.2 percent of households reporting having a personal computer in 1984, 61.8 percent in 2003, and 75.6 percent in 2011. Internet access has progressed similarly, with only 18.0 percent of households reporting access in 1997, 54.7 percent in 2003, and 71.7 percent in 2011.

We now exist in a time where a majority of people living around us own and use computers. The importance of the study of computer science is at an all-time high and will only continue to rise as technologies continue to develop.

However, while understanding how to use various technologies like Microsoft Office, smartphones, Google applications, and so forth is extremely important in the current day and age, it is not the heart of computer science.

Computer science is a craft of solving problems. As a computer scientist, you train to figure out the most efficient ways to perform various tasks, analyze root causes of problems that arise, uproot those problems, and create something tangible.

An article posted in the Huffington Post last August, titled “Six Reasons Why Studying Computer Science Is Worth It,” lists reason two as “You will feel like God,” citing the divine sensation of creating something that will last forever. It’s truly remarkable to think that unlike most things in the world, there is no decay associated with electronic information.

The physical machinery merely serves as a vessel for the actual information, allowing the creation of new machines to perpetuate the old information.

Solving problems as a computer scientist is not strictly contained to the world of computers. When I tell people I’m a computer science major, they assume I can read lines of 0s and 1s as if they were English prose. This is not at all what computer science is about.

My two favorite examples detailing real-world analogies of famous computer science concepts are Binary Search and Recursion. Both have scary, foreign names, but the underlying ideas are quite simple.

Binary Search is a very fast way to find something from a sorted list. Imagine you have a dictionary and you want to tell a computer how to find a particular word, say “rupture.”

Unfortunately, it turns out that you can’t simply tell it to “turn to the r section,” and telling it to search every entry from the beginning until it finds “rupture” could take a really long time if you had a big dictionary.

Instead, we think about how we answer when someone says “guess my number between 1 and 100.” We simply guess 50; if they say higher, we guess 75, and if lower, we guess 25. We keep cutting the possible range in half until we find the right number.

That’s exactly what binary search is. So with the dictionary, we just flip the dictionary halfway open, check to see if the entry is larger or smaller than ours, then cut the dictionary in half accordingly.

To find an entry in a dictionary with 1,000,000 entries by starting from the beginning and searching until we find the right one would take 500,000 tries on average, but binary search takes only about 20.
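The guessing-game procedure above translates almost line for line into code. A minimal sketch over a sorted list (the “dictionary”):

```python
def binary_search(sorted_list, target):
    """Return the index of target in sorted_list, or -1 if it's absent."""
    lo, hi = 0, len(sorted_list) - 1
    while lo <= hi:
        mid = (lo + hi) // 2            # open the dictionary halfway
        if sorted_list[mid] == target:
            return mid
        elif sorted_list[mid] < target:
            lo = mid + 1                # our word comes later: keep the back half
        else:
            hi = mid - 1                # our word comes earlier: keep the front half
    return -1
```

Each pass through the loop halves the remaining range, which is why a million entries need only about 20 comparisons: 2^20 is just over 1,000,000.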

Recursion is a way of solving a problem by reducing it to a smaller copy of the same problem.

Imagine you’re standing in a huge line to get into an amusement park and you want to figure out how many people are in line. You could try counting it all yourself, but that would be pretty hard to do if the line were made up of 100,000 people.

Instead, recursion says that to find out what position you’re in, you just ask the person in front of you, “What position are you in?” Presumably that person doesn’t know either, so he asks the person in front him, who asks the person in front of her, and so forth. This goes until it gets to the front of the line, where the person there declares that he is in position 1, then the person behind him knows that she is in position 2 and so forth, all the way back to where you’re standing.
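The line of people maps directly onto a recursive function: “ask the person in front” is the recursive call, and “the front person declares position 1” is the base case. A minimal sketch:

```python
def position(i):
    """Position of the person at index i in line (index 0 is the front)."""
    if i == 0:
        return 1                  # base case: the front person is in position 1
    # otherwise, "ask the person in front", then add one for yourself
    return position(i - 1) + 1
```

One honest caveat: a real 100,000-person line would overflow Python’s default recursion limit (roughly 1,000 nested calls), which is exactly the sort of trade-off between an elegant idea and a practical implementation that computer scientists spend time thinking about.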

Neither of those two problems was particularly math-intensive, which brings up a common misconception surrounding computer science: that it’s math-intensive. Math is important for understanding some of the underlying structures at play, but much like in music composition, the math exists but is often not seen or used at the surface level.

Mathematicians can use Fourier series to describe harmonics and the overtone series, analyzing exactly what will sound sweet or dissonant. But composers compose with their ears. They listen for the sounds they like, and they produce them.

Computer scientists compose with their perceptions of annoyance in the world. We listen for the things we don’t like, and we fix them. So pick up a text editor and start composing. Fix your (hello) world.