Q & A with Vint Cerf, Google’s Chief Evangelist (VNUNet.com)

Vinton Gray Cerf is a computer scientist who is commonly referred to as one of the “founding fathers of the Internet” for his key technical and managerial role in the creation of the Internet and the TCP/IP protocols it uses. On September 8, 2005 Google announced that it had hired Cerf as “Vice President and Chief Internet Evangelist.” (source: Wikipedia)

Q Why has Google acquired video sharing site YouTube?

A The internet is shifting video production from the entertainment industry to consumers, but the quality varies greatly. There are more editing tools now, so that will improve. YouTube is a medium with a substantial clientele. Advertising revenue will work well in this medium and that will grow our footprint, but it was also a defensive move to keep competitors at bay.

Q Which computing developments in the past 50 years do you regard as the most groundbreaking?

A We have gone from simple switching in the phone system to tubes to transistors to integrated circuits, and that has had a profound effect. It has produced powerful, small devices that use very little power. Some people think Moore’s Law has run its course, but we keep finding yet more ways to tweak CMOS chip technology to make it run faster with less power, and its potential is not exhausted yet. We will eventually run out of capacity for that technology, of course. The other dramatic change is the spread of widespread high-capacity networks.

Today, we have computers in our pockets, embedded in cars, in the house and so on. The internet has 400 million machines connected to it – not including laptops, PDAs, and the 2.5 billion internet-enabled mobile phones. So we have two billion devices on the internet, but only one billion users. If we extrapolate that, we will have billions of devices interacting with an unpredictable effect on the network.

Q What recent developments have impressed you most?

A Other than using radio as an alternative to hard wiring, I think it has been very high-speed broadband available at consumer prices. With regard to computers, one of the most interesting developments is dual-core processors, which allow one processor to watch another in a single computer. Also, we now have a heightened understanding of the threat of bugs in the digital online environment. We cannot write bug-free code, nor predict the number of bugs in a given number of lines of code and test for them. This is a high-risk area. We have even borrowed language from biology – worms and viruses – and mythology – Trojans – to describe these threats. But we have not done a great job of responding to these vulnerabilities.

Q Looking back on your career, is there anything you would do differently?

A With the design of the internet, there are several things I would change if I could. The 32-bit address space was insufficient. We are now addressing this with the 128-bit address system, which will last until I’m dead, then it’s somebody else’s problem. Authenticity as a principle would have been nice from the outset of the design. In 1978, public key cryptography was not generally available, but it would have helped. As industry and business took to the internet so quickly, clearer notions for virtual private networks would have helped. And with the mobile internet being so popular, we could have done a better job of enabling mobile access.
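The jump from 32-bit to 128-bit addressing that Cerf mentions is easy to quantify with back-of-the-envelope arithmetic. A quick sketch in Python (purely illustrative, not from the interview):

```python
# IPv4 uses 32-bit addresses; IPv6 uses 128-bit addresses.
ipv4_space = 2 ** 32    # about 4.3 billion addresses
ipv6_space = 2 ** 128   # about 3.4 x 10^38 addresses

print(f"IPv4 addresses: {ipv4_space:,}")
print(f"IPv6 addresses: {ipv6_space:.3e}")

# Every IPv4 address could be expanded into 2^96 IPv6 addresses.
print(f"Expansion factor: 2^96 = {ipv6_space // ipv4_space:.3e}")
```

With roughly two billion devices already online at the time of the interview, the 32-bit space was nearly exhausted; the 128-bit space leaves room for growth that is hard to even visualize.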

Q What are the future problems for the internet?

A Imagine it is 3000 AD and you come across a PowerPoint 97 file. Does Windows 3000 know how to interpret it? This is not having a go at Microsoft because a lot of software support is retired. But what can we do with unsupported formats? How do we get permission to run software on the internet? What if we need a different operating system to run it? We have an intellectual property challenge to preserve interpretability and we have the problem of the digital storage medium.

Q Does the web still need evangelists? How do you see your role?

A Very much so. In the beginning I asked for the title Archduke, but someone pointed out that the last one was assassinated, so perhaps it’s just as well that it didn’t fit with Google’s approach. But we need internet evangelism. Even though there are a billion web users, there are still many billions that are not online.

Q Is the development of the semantic web the next big thing for the internet?

A This is a conundrum. Google could do a better job of presenting relevant search results with semantic tags, but where will the tags come from? What vocabularies shall we use? Who supplies the tags, and will it be done manually, which has a scaling problem, or automatically? Hopefully the latter.

Today HTML and XHTML are usually generated automatically, so we can imagine the same for semantic tags. But we don’t have them now. Meta-tagging has been abused by web publishers to draw people to sites under false pretences, which is why some search engines ignore meta-tags. This is about online authenticity – digital signatures would be helpful.

What it means: always interesting to read the thoughts of one of the most brilliant computer scientists alive.
