I’ve recently been reading some student entries about the history of the Internet on LinkedIn, and I’ve published a post on the same topic on Storify. Here is what I’ve gathered from my readings.
I’ve decided to discuss the two videos together as they tend to overlap. It has been a while since Web 1.0 first emerged, and when it did, it mainly served out information. Web 1.0, or rather the people behind it, had full control over what to publish and what not to publish, and as a result, people only knew as much, or as little, as the Internet had to offer. In other words, it was authoritative: people surfing the net on Web 1.0 were greatly restricted in terms of content, as the material published was often inadequate or completely irrelevant. On top of that, Web 1.0 was passive and not at all interactive with its users. But that all changed when the new and improved Web 2.0 made its debut.
Web 2.0 came about not long after Web 1.0 went public. Unlike the one-way Web 1.0, Web 2.0 encouraged its users to socialize. It was interactive, which in turn brought about various social networking sites. Besides that, it featured user-generated content through blogging, tagging, social networking and social bookmarking. This then led to a newer, more intelligent version of the web, known as Web 3.0. Before we go into the specifics of Web 3.0, below is a table comparing Web 1.0 and Web 2.0.
Web 3.0 isn’t a far cry from what we know the Internet to be today, but rather a continuation of existing techniques. Web 3.0 utilizes recommender systems, which take a personal approach: online polls, ratings, comments and reviews all feed into them. In addition, it employs smart systems that anticipate and/or calculate user preferences.
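To make the recommender idea concrete, here is a minimal sketch in Python. The users, sites and ratings are all invented for illustration, and the similarity measure (counting identical ratings on shared items) is far simpler than anything a real system would use; it only shows the basic principle of recommending what like-minded users rated highly.

```python
# Toy ratings data: user -> {site: rating from 1 to 5}. Invented values.
ratings = {
    "alice": {"site_a": 5, "site_b": 3, "site_c": 4},
    "bob":   {"site_a": 5, "site_b": 3, "site_d": 5},
    "carol": {"site_b": 1, "site_e": 4},
}

def recommend(user):
    """Suggest sites rated highly by users who agree with `user`."""
    mine = ratings[user]
    scores = {}
    for other, theirs in ratings.items():
        if other == user:
            continue
        shared = set(mine) & set(theirs)
        # Crude similarity: how many shared sites they rated identically.
        sim = sum(1 for site in shared if mine[site] == theirs[site])
        for site, r in theirs.items():
            if site not in mine and r >= 4:
                scores[site] = scores.get(site, 0) + sim
    # Most strongly recommended sites first.
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("alice"))  # -> ['site_d', 'site_e']
```

Since bob rates the same sites the same way alice does, his favourite site_d tops her list.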
The Internet transforms gradually until it is invisibly present in our everyday lives and appliances, and when all these appliances start communicating and exchanging information with one another via the Internet, they make our everyday lives easier by providing us with additional services. Services develop because data from more sources can be linked easily, and with wireless infrastructure these new Internet services are available to us anytime and anywhere. To back up my point, social networking sites often cross-reference schedules and events across all your registered accounts and automatically add them to your online calendar. While the Internet may show signs of steady growth, its look and feel changes ever so rapidly. The evolution from Web 1.0 to Web 2.0 and finally Web 3.0 is astounding. It’s amazing how much the Internet has grown and how fast it is still growing. Not to mention, it’s interesting to wonder what future computer scientists will bring.
It took me a while to actually wrap my head around what this video was trying to communicate. It was definitely more of a struggle compared to all the other ones. What it basically says is this: HTML determines the structure of a web document, and that structure is built from structural elements such as <p> and <li>, which refer to “paragraph” and “list item” respectively. As HTML developed, stylistic elements like <b> for bold and <i> for italic were added. These stylistic elements went on to control how content would be formatted, and after that, form and content became indivisible; it was difficult to tell one from the other. XML was invented to solve this problem. It helped separate the two by introducing more structural elements: <title>, <description>, <link> and <image>, to name a few.
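A small sketch of what that separation looks like in practice. The XML below is an invented example, but the point stands: the tags describe what each piece of content is, saying nothing about how it should look, so any program can pull out exactly the field it needs (here using Python’s standard library parser).

```python
import xml.etree.ElementTree as ET

# An invented XML item using the structural elements named above.
xml_doc = """
<item>
  <title>Internet History</title>
  <description>A short post on Web 1.0, 2.0 and 3.0.</description>
  <link>http://example.com/post</link>
</item>"""

item = ET.fromstring(xml_doc)
# The tags carry structure, not styling, so extraction is unambiguous:
print(item.find("title").text)  # -> Internet History
print(item.find("link").text)   # -> http://example.com/post
```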
On the other hand, XML facilitates automated data exchange, and the exchanged data is then organized into categories. But who exactly organizes the data? The answer is simple: the data is organized through tagging, which is easily done by Internet users just like you and me. When we tag pictures or posts, we are teaching the machine, and every time we copy a link, we teach it an idea. Web 2.0 is all about connecting people, and because people are given the freedom to share, contribute and collaborate with one another, we need to reexamine a handful of things, including copyright, ethics, privacy and identity.
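Tagging as a way of organizing data can be sketched in a few lines. The items and tag names below are made up; the point is that every human act of tagging adds an association the machine can later search on.

```python
from collections import defaultdict

# tag -> set of items carrying that tag
index = defaultdict(set)

def tag(item, *tags):
    """Record that `item` was labelled with each tag by a user."""
    for t in tags:
        index[t].add(item)

# Users tagging content teaches the system what things are about:
tag("beach_photo.jpg", "holiday", "beach")
tag("trip_blog_post", "holiday", "travel")

print(sorted(index["holiday"]))  # -> ['beach_photo.jpg', 'trip_blog_post']
```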
The Semantic Web is all about helping the computer understand what it’s showing us. The web came up with a way for us to retrieve and view information as we please. When we type a site address into the browser, the browser sends a request to the website, asking it to pull up the content stored at the given address. The website then retrieves the information and sends it back to the browser in the form of HTML code, and finally this code is analyzed, understood and displayed by the computer. But even though computers may be taught to interact with one another, they don’t actually understand what they are saying, or the semantics of the web page. Computers need to make sense of what they are showing us so they can assist us more effectively. As soon as they can fully grasp the semantics of the content, they can help us interact with one another better, and at the same time search engines would be more accurate with their results.
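The request-and-display cycle described above can be sketched without any network at all. The address and page below are made up, and the server’s reply is canned, but the shape is the real one: the browser sends an HTTP request, gets HTML back, and parses it into something it can show.

```python
from html.parser import HTMLParser

# What a browser sends when you type in an address (hypothetical host):
request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "\r\n"
)

# What the server might send back (canned here, so no network is needed):
response_body = "<html><body><h1>Hello</h1><p>Welcome!</p></body></html>"

# The browser then analyzes the HTML -- here we just collect the text,
# which is all the machine "knows": structure, not meaning.
class TextCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.text = []
    def handle_data(self, data):
        self.text.append(data)

parser = TextCollector()
parser.feed(response_body)
print(" ".join(parser.text))  # -> Hello Welcome!
```

Notice that the parser recovers the words but has no idea whether “Hello” is a greeting, a product name or a headline; that gap is exactly what the Semantic Web tries to close.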
Lastly, the Semantic Web is something all of us use on a daily basis, but not all of us are aware of its existence; we’re already so familiar with it that we don’t even realize it’s there. Many people use things without actually knowing how they’re made or how exactly they function. I found this video very compelling, and watching it got me curious about how things, especially gadgets, work.
Now that we know what the Semantic Web is all about, the next question to ask ourselves is how these computers manage to first analyze the topic we’re exploring and then recommend other sites or posts they deem useful or relevant to our topic of interest. The answer is that the Semantic Web describes the correlations between things and the properties of things. To illustrate my point, when we look up a disease on the web, it will not only get back to us with information about the disease itself, it will also display links to posts discussing the symptoms and treatments for it. The inventor of the World Wide Web, Tim Berners-Lee, set out to make all types of data available to anyone using the web, and he succeeded in that aspect.
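The “correlations between things” idea can be sketched as a tiny store of subject–predicate–object facts, which is how the Semantic Web models knowledge. The disease, property names and values below are invented for illustration, not taken from any real medical vocabulary.

```python
# Facts stored as (subject, predicate, object) triples.
triples = [
    ("Influenza", "hasSymptom", "fever"),
    ("Influenza", "hasSymptom", "fatigue"),
    ("Influenza", "hasTreatment", "rest"),
    ("Influenza", "hasTreatment", "antiviral drugs"),
]

def query(subject, predicate):
    """Return every object linked to `subject` by `predicate`."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# Looking up a disease, the machine can follow the links to related facts:
print(query("Influenza", "hasSymptom"))    # -> ['fever', 'fatigue']
print(query("Influenza", "hasTreatment"))  # -> ['rest', 'antiviral drugs']
```

Because the relationships themselves are stored, a machine can answer “what treats this?” rather than just matching the word “Influenza” in a page of text.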
Next, the data web, like the document web discussed above, involves standards. While the document web is represented in HTML code, the data web is represented in RDF, otherwise known as the Resource Description Framework. RDF describes web resources such as the title, author, modification date, content and copyright information of a web page. Besides that, putting information into RDF files allows computer programs, or “web spiders”, to search, discover, collect, analyze and process information from the web.
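Here is a small sketch of such a description, parsed with Python’s standard library. The page URL, title, author and date are invented; the namespaces, however, are real ones (the RDF syntax namespace and Dublin Core, a widely used vocabulary for exactly these title/creator/date properties).

```python
import xml.etree.ElementTree as ET

# An invented RDF/XML description of a web page.
rdf_xml = """<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/">
  <rdf:Description rdf:about="http://example.org/page">
    <dc:title>My Web Page</dc:title>
    <dc:creator>Jane Doe</dc:creator>
    <dc:date>2013-05-01</dc:date>
  </rdf:Description>
</rdf:RDF>"""

root = ET.fromstring(rdf_xml)
ns = {"rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
      "dc": "http://purl.org/dc/elements/1.1/"}

# A "web spider" can pick out each property without guessing:
desc = root.find("rdf:Description", ns)
print(desc.find("dc:title", ns).text)    # -> My Web Page
print(desc.find("dc:creator", ns).text)  # -> Jane Doe
```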
“If HTML and the Web made all the online documents look like one huge book, RDF, schema, and inference languages will make all the data in the world look like one huge database.” – Tim Berners-Lee, Weaving the Web, 1999
Whether you’re a scientist, researcher, financial analyst or even a stay-at-home mom, the data web is no doubt an essential tool for all of us. Lastly, the Semantic Web has come a long way, and it will continue evolving and improving to make our lives more comfortable.
Epic 2015 is a speculative prediction of what the future might hold for us. It predicts the end of print media and of the way news is published and read by the masses. Its forecast may not be flawless, but it is still scarily accurate at times. For instance, it predicted Google Maps, Apple’s iPhone, social networking, etc., but let’s not dwell on the past. Epic 2015 also predicted an evolving personalized information construct, referred to as EPIC. EPIC produces a custom content package for each user; things like the user’s choices, consumption habits, interests and demographic are all taken into account to shape the product to its finest.
Subsequently, it predicted that the New York Times would go offline in 2014, and that in 2015 people everywhere would start tagging their broadcasts with GPS data, which in turn changes the way news travels and potentially increases its efficiency and relevance. The future without a doubt sounds intriguing. However, whether or not this becomes a reality remains to be seen.