Common misconceptions about computer science

Computers are a pretty new thing. There is a lot of discussion about when the first computer was invented, because it all depends on how we define "computer."

“Computer” was first recorded as being used in 1613 to describe any individual who did math or calculations. “Computer” later came to describe the Difference Engine, the first mechanical computer, developed by Charles Babbage in 1822.

Columbia University identifies the IBM (International Business Machines) 610 as the first personal computer because it was the first programmable computer intended for use by one person. The IBM 610 was announced in 1957, and because it cost $55,000 ($457,118 after inflation), only 180 units were produced.

Regardless of which exact computer was the first, computers did not begin to change the life of average people until the late 1980s through the 1990s.

According to the 2011 U.S. Census, home computer use began in the early 1980s and has been growing steadily since, with only 8.2 percent of households reporting having a personal computer in 1984, 61.8 percent in 2003, and 75.6 percent in 2011. Internet access has progressed similarly, with only 18.0 percent of households reporting access in 1997, 54.7 percent in 2003, and 71.7 percent in 2011.

We now live in a time when a majority of the people around us own and use computers. The importance of studying computer science is at an all-time high and will only continue to rise as technologies develop.

However, while understanding how to use various technologies like Microsoft Office, smartphones, Google applications, and so forth is extremely important in the current day and age, it is not the heart of computer science.

Computer science is a craft of solving problems. As a computer scientist, you train to figure out the most efficient ways to perform various tasks, analyze root causes of problems that arise, uproot those problems, and create something tangible.

An article posted in the Huffington Post last August, titled “Six Reasons Why Studying Computer Science Is Worth It,” lists reason two as “You will feel like God,” citing the divine sensation of creating something that will last forever. It’s truly remarkable to think that unlike most things in the world, there is no decay associated with electronic information.

The physical machinery merely serves as a vessel for the actual information, allowing the creation of new machines to perpetuate the old information.

Solving problems as a computer scientist is not strictly confined to the world of computers. When I tell people I'm a computer science major, they assume I can read lines of 0s and 1s as if they were English prose. This is not at all what computer science is about.

My two favorite examples detailing real-world analogies of famous computer science concepts are Binary Search and Recursion. Both have scary, foreign names, but the underlying ideas are quite simple.

Binary Search is a very fast way to find something from a sorted list. Imagine you have a dictionary and you want to tell a computer how to find a particular word, say “rupture.”

Unfortunately, it turns out that you can’t simply tell it to “turn to the r section,” and telling it to search every entry from the beginning until it finds “rupture” could take a really long time if you had a big dictionary.

Instead, we think about how we answer when someone says "guess my number between 1 and 100." We simply guess 50; if they say higher, we guess 75, and if they say lower, we guess 25. We keep cutting the possible range in half until we find the right number.

That’s exactly what binary search is. So with the dictionary, we just flip the dictionary halfway open, check to see if the entry is larger or smaller than ours, then cut the dictionary in half accordingly.

To find an entry in a dictionary with 1,000,000 entries by starting from the beginning and searching until we find the right one would take 500,000 tries on average, but binary search takes only about 20, because halving 1,000,000 entries twenty times gets you down to a single entry.
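To make the idea concrete, here is a minimal Python sketch of binary search on a sorted list of words. The function name and the sample word list are made up for illustration, not taken from any particular library.

    # A sketch of binary search over a sorted list of words (illustrative names).
    def binary_search(sorted_words, target):
        low, high = 0, len(sorted_words) - 1
        while low <= high:
            mid = (low + high) // 2            # open the "dictionary" halfway
            if sorted_words[mid] == target:
                return mid                     # found our entry
            elif sorted_words[mid] < target:
                low = mid + 1                  # our word comes later; discard the first half
            else:
                high = mid - 1                 # our word comes earlier; discard the second half
        return -1                              # the word is not in the list

    words = ["apple", "banana", "rupture", "zebra"]   # already in alphabetical order
    print(binary_search(words, "rupture"))            # prints 2

Each pass through the loop throws away half of the remaining entries, which is exactly why a million-entry dictionary takes only about twenty checks.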

Recursion is a way of solving a problem by breaking it down into smaller versions of the same problem.

Imagine you’re standing in a huge line to get into an amusement park and you want to figure out how many people are in line. You could try counting them all yourself, but that would be very hard to do if the line were made up of 100,000 people.

Instead, recursion says that to find out what position you’re in, you just ask the person in front of you, “What position are you in?” Presumably that person doesn’t know either, so he asks the person in front of him, who asks the person in front of her, and so forth. This goes on until it reaches the front of the line, where the person there declares that he is in position 1; then the person behind him knows that she is in position 2, and so on, all the way back to where you’re standing.
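Here is a toy Python sketch of that line-counting idea, assuming for simplicity that "asking the person ahead of you" is just a recursive call; the function name is made up for illustration.

    # A toy sketch of the amusement-park line (illustrative, not from the article).
    def position_in_line(people_ahead):
        if people_ahead == 0:
            return 1                                   # the front of the line is position 1
        # "Ask" the person directly ahead, then add one for yourself.
        return position_in_line(people_ahead - 1) + 1

    print(position_in_line(4))   # with 4 people ahead of you, you are in position 5

A real 100,000-person line would blow past Python's default recursion limit (around a thousand calls), so this is only meant to show the shape of the idea, not to handle the whole amusement park.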

Neither of those two problems was particularly math-intensive, which brings up a common misconception surrounding computer science: that it is math-intensive. Math is important for understanding some of the underlying structures at play, but much as in music composition, the math exists yet is often not seen or used at the surface level.

Mathematicians can compose Fourier series to detail harmonic values and the overtone series, analyzing exactly what will sound sweet or dissonant. But composers compose with their ears. They listen for the sounds they like, and they produce them.

Computer scientists compose with their perceptions of annoyance in the world. We listen for the things we don’t like, and we fix them. So pick up a text editor and start composing. Fix your (hello) world.
