I’ve recently been through some student entries about the history of the Internet on LinkedIn, and I’ve published a post on the same topic on Storify. Here is what I’ve gathered from my readings.
I’ve decided to discuss the two videos together as they tend to overlap. It has been a while since Web 1.0 first emerged, and when it did, it mainly served up information. With Web 1.0, the person behind a site had full control over what to publish and what not to publish, so people only knew as much, or as little, as the web had to offer. In other words, it was authoritative, and people surfing the net in the Web 1.0 era were greatly restricted in terms of content, as the material published online was often inadequate and sometimes completely irrelevant. Not to forget that Web 1.0 was passive and not at all interactive with its users. But all of that changed when the new and improved Web 2.0 made its debut.
Web 2.0 came about not long after Web 1.0 went public. Unlike the uninteresting, one-way Web 1.0, Web 2.0 encouraged its users to socialize. It was interactive, which in turn gave rise to various social networking sites. Besides that, it features user-generated content through blogging, tagging, social networking and social bookmarking. This then led to a newer, more intelligent version of the web, known as Web 3.0. Before we go into the specifics of Web 3.0, below is a table comparing Web 1.0 and Web 2.0.
Web 3.0 isn’t a far cry from what we know the Internet to be today, but rather a continuation of existing techniques. Web 3.0 utilizes recommender systems, which take a personal approach. Online polls, ratings, comments and reviews are all sources these systems draw on. In addition, it employs smart systems that anticipate and/or calculate user preferences.
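To make the idea of a recommender system concrete, here is a minimal sketch in Python. The article names and rating scores below are made up purely for illustration; real recommender systems are far more sophisticated, but the core idea of ranking content by aggregated user feedback is the same.

```python
def recommend(ratings, exclude=()):
    """Return items ranked by average user rating, highest first."""
    averages = {
        item: sum(scores) / len(scores)
        for item, scores in ratings.items()
        if item not in exclude
    }
    return sorted(averages, key=averages.get, reverse=True)

# Hypothetical scores gathered from polls, reviews and comments.
ratings = {
    "article-a": [5, 4, 5],
    "article-b": [2, 3, 2],
    "article-c": [4, 4, 3],
}

print(recommend(ratings))  # highest-rated articles come first
```

A smarter system would also weigh in who is asking, which is exactly the personal touch Web 3.0 is said to add.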
The Internet is gradually transforming until it is invisibly present in our everyday lives and appliances. When all these appliances start communicating and exchanging information with one another via the Internet, they make our everyday lives easier by providing us with additional services. New services develop because data from more sources can be linked easily, and thanks to wireless infrastructure, these Internet services are available to us anytime and anywhere. To back up my point, social networking sites often cross-reference the schedules and events in all your registered accounts and automatically add them to your online calendar. While the Internet itself grows steadily, its look and feel changes rapidly. The evolution from Web 1.0 to Web 2.0 and finally Web 3.0 is astounding. It’s amazing how much the Internet has grown and how fast it is still growing. Not to mention, it’s interesting to wonder what future computer scientists will bring.
It took me a while to actually wrap my head around what this video was trying to communicate. It was definitely more of a struggle compared to all the other ones. What it basically says is this: HTML determines the design structure of a web document, and that structure is never short of structural elements such as <p> and <li>, which refer to “paragraph” and “list item” respectively. As
HTML developed, stylistic elements like <b> for bold and <i> for italic were added. These stylistic elements went on to control how content would be formatted, and after that, form and content became indivisible. It was difficult to differentiate one from the other. XML was invented to solve this problem. It helped separate the two by introducing more structural elements – <title>, <description>, <link> and <image> to name a few.
On top of that, XML facilitates automated data exchange, and the exchanged data is organized into categories. But who exactly organizes the data? The answer is simple. The data is organized through the act of tagging, which is easily done by Internet users like you and me. When we post and tag pictures or posts, we are teaching the machine, and every time we copy a link, we teach it an idea. Web 2.0 is all about connecting people, and because people are given the freedom to share, contribute and collaborate with one another, we need to reexamine a handful of things, including copyright, ethics, privacy and identity.
The Semantic Web is all about helping the computer understand what it’s showing us. The web gives us a way to retrieve and view information as we please. When we type a site address into the browser, the browser sends a request to the website, asking it to serve the content stored at the given address. The website retrieves the information and sends it back to the browser in the form of HTML code, and finally this code is analyzed, interpreted and displayed by the computer. But even though computers may be taught to interact with one another, they don’t actually understand what they are saying, or the semantics of the web page. Computers need to make sense of what they are showing us so they can assist us more effectively. As soon as they fully grasp the semantics of the content, they can help us interact with one another better, and at the same time, search engines will be more accurate with their results.
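The request-and-render cycle described above can be sketched with Python’s standard html.parser module. The request line and the canned HTML response below are invented stand-ins; a real browser would receive the response over the network before analyzing it.

```python
from html.parser import HTMLParser

# What the browser sends to the website at the given address (simplified).
request = "GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"

# A canned stand-in for the HTML code the website would send back.
html_response = "<html><body><h1>Hello, web!</h1></body></html>"

class HeadingGrabber(HTMLParser):
    """A toy 'browser' that analyzes the HTML code it receives."""
    def __init__(self):
        super().__init__()
        self.in_h1 = False
        self.heading = None

    def handle_starttag(self, tag, attrs):
        if tag == "h1":
            self.in_h1 = True

    def handle_endtag(self, tag):
        if tag == "h1":
            self.in_h1 = False

    def handle_data(self, data):
        if self.in_h1:
            self.heading = data

browser = HeadingGrabber()
browser.feed(html_response)
print(browser.heading)  # Hello, web!
```

Notice that the parser only knows the tag is an <h1>; it has no idea what the words inside it mean, which is exactly the gap the Semantic Web tries to close.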
Lastly, the Semantic Web is something all of us use on a daily basis, but not all of us are aware of its existence, in the sense that we’re already so familiar with it that we don’t even realize it’s there. Many people use things without actually knowing how they are made or how exactly they function. I found this video very compelling, and watching it got me curious about how things, especially gadgets, work.
Now that we know what the Semantic Web is all about, the next question to ask ourselves is how these computers manage to first analyze the topic we’re exploring and then recommend other sites or posts they deem useful or relevant to our topic of interest. The answer is this: the Semantic Web describes the correlations between things and the properties of things. To illustrate my point, when we look up a disease on the web, it will not only return information about the disease itself, it will also display links to posts discussing the symptoms and treatments for that disease. The inventor of the World Wide Web, Tim Berners-Lee, set out to make all types of data available to anyone and everyone using the web, and he succeeded in that aspect.
Next, the data web, like the document web discussed above, involves standards. While the document web is represented in the form of HTML code, the data web is represented in RDF, otherwise known as the Resource Description Framework. RDF describes web resources such as the title, author, modification date, content and copyright information of a web page. Besides that, putting information into RDF files allows computer programs, or “web spiders”, to search, discover, pick up, collect, analyze and process information from the web.
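At its core, RDF expresses everything as subject-predicate-object statements, or triples. Below is a minimal sketch in plain Python of how a program might store and query such triples. The resource URL and property names are made up for illustration, and real RDF tooling uses standardized vocabularies and serializations rather than bare tuples.

```python
# Each statement links a resource (subject) to a value (object)
# through a named property (predicate).
triples = [
    ("http://example.com/page", "title", "Weaving the Web notes"),
    ("http://example.com/page", "author", "A. Student"),
    ("http://example.com/page", "modificationDate", "1999-11-01"),
]

def query(triples, subject=None, predicate=None):
    """Return the objects of every triple matching the given pattern."""
    return [
        o for s, p, o in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
    ]

print(query(triples, predicate="author"))  # ['A. Student']
```

Because every fact has the same three-part shape, a “web spider” can combine triples from many sites into one big, queryable pool of data, which is the database-like web the quote below describes.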
If HTML and the Web made all the online documents look like one huge book, RDF, schema, and inference languages will make all the data in the world look like one huge database – Tim Berners-Lee, Weaving the Web, 1999
Whether you’re a scientist, researcher, financial analyst or even a stay-at-home mom, the data web is no doubt an essential tool for all of us. Lastly, the Semantic Web has come a long way, and it will continue evolving and improving to help make our lives more comfortable.
Epic 2015 is a speculative prediction of what the future might hold for us. It predicts the end of print media and a change in the way news is published and read by the masses. Its forecast may not be flawless, but it is still scarily accurate at times. For instance, it predicted Google Maps, Apple’s iPhone, social networking and so on, but let’s not dwell on the past. Epic 2015 also predicted an evolving personalized information construct, referred to as EPIC. EPIC produces a custom content package for each user. Things like the user’s choices, consumption habits, interests and demographics are all taken into account in order to shape the product.
Subsequently, it predicted that the New York Times would go offline in the year 2014, and as for the year 2015, it predicted that people everywhere would start tagging their broadcasts with GPS data, which in turn changes the way news travels and potentially increases its efficiency and relevance. The future without a doubt sounds intriguing. However, whether or not this becomes a reality still remains to be seen.
The invention of the World Wide Web by Tim Berners-Lee brought about other web browsers such as ViolaWWW, Line Mode Browser, Erwise and so forth. These browsers, however, were incredibly dull to look at and required external applications just so their multimedia content could be accessed. Besides that, the Internet back then was a tool for academics, scientists and researchers, and consisted of line after line of text. It lacked imagery and rich material. There was nothing worth finding and nothing to find it with.
Then along came a computer science student from the University of Illinois named Marc Andreessen, who had a plan to revolutionize the Internet and turn it into something people would use in their daily lives. He and a group of enthusiasts worked on making the Internet more accessible by adding graphics, audio and video capabilities. Together, they came up with the world’s first popular graphical web browser, known as Mosaic. The browser was launched in the fall of 1993 and was downloadable free of charge. Mosaic went viral. It had gone, and I quote,
from a toy for geeks and a tool for scientists, to a bona fide mass medium.
But for Mosaic to go worldwide, it needed funding, and lots of it. James H. Clark, founder of Silicon Graphics, came to the rescue. After hearing about Mosaic and its success, Clark personally reached out to Andreessen and insisted they start a software company together. After a ton of agreements and paper signing, the two went on a recruiting spree, and in the summer of 1994 they launched Netscape, the fastest-growing software company the world had ever seen.
After months of nonstop coding, Netscape finally launched its new browser, called Navigator. Navigator, like Mosaic, was a success. Netscape’s Navigator revolutionized the Internet, and this drove Bill Gates, CEO of Microsoft at the time, up the wall. This was when Bill Gates finally recognized the importance of the web. He then set out to correct the mistake he had made, that mistake being underestimating Netscape.
In 1995, Navigator was a huge hit. Its importance was compared to TV and print media. Netscape was at the peak of its success, and Microsoft was ready to go to extreme lengths to bury it. Microsoft was ruthless and relentless. After hearing Marc Andreessen, co-founder of Netscape, talk trash about Microsoft, Gates rounded up his best troops and prepared for war. He fought back with his very own, free-of-charge web browser, Internet Explorer. Gates had a plan, and it was to analyze and imitate Netscape’s every move. On top of that, he had his team of salesmen stop PC manufacturers from installing anything other than Internet Explorer on the Windows platform. This caused Netscape to lose a big fraction of its revenue and, eventually, the company itself.
By September of 1997, the browser wars were over. Microsoft was once again back on top and in complete control of the software platform. However, that’s not the end of the story. The war might have been over for Netscape, but for Microsoft, it was merely the beginning. Microsoft had itself a new rival – the U.S. Department of Justice. In 1998, Microsoft was accused of using its Windows monopoly to prevent consumers from accessing Netscape’s products.
It doesn’t help the consumers, it doesn’t help Netscape, the only one who benefits from it is Microsoft – Anti-Trust Lawyer, Gary Reback
Microsoft was found guilty soon after the court hearings began, and Bill Gates stepped down as CEO of Microsoft not long after. Since then, many other web browsers have emerged on the market, one of them being the famous Mozilla Firefox. Firefox is a descendant of Netscape; it grew out of Mozilla, the original code name for Netscape’s browser. After Microsoft’s historic win in the browser wars, it diverted its attention to other things while Firefox gradually improved itself. Now, all the hard work over the years has paid off, as Firefox has for a while been viewed as technically superior to current versions of Internet Explorer. According to data collected by Net Applications, Firefox beat Microsoft by getting almost twice as much usage in half the time.
As of March 26, Firefox 4 was seeing a 3.64 percent share of browser usage after only being available for 5 days. IE9, which launched just a week earlier had 1.78 percent after 12 days – NBC News
In conclusion, the browser wars are an ongoing and seemingly never-ending battle. The question is, which browser will revolutionize the Internet we know today? One thing we can be sure of is that, as of now, the reign of Microsoft is over.