Wednesday 18 March 2015

About Online Trading/Shopping

History of Online Trading/Shopping

English entrepreneur Michael Aldrich invented online shopping in 1979. One of the first online trading sites was Game Trading Zone.
Statistics show that in 2012, Asia-Pacific increased its international sales by over 30%, giving it over $433 billion in revenue, about $69 billion more than the U.S. revenue of $364.66 billion. It was estimated that Asia-Pacific would grow by another 30% in 2013, accounting for more than one-third of all global e-commerce sales.
The largest online shopping day in the world is Singles' Day, with sales on Alibaba's sites alone reaching US$9.3 billion in 2014.

Video Games! Now you're talking!

Platforms

The term "platform" refers to the specific combination of electronic components or computer hardware which, in conjunction with software, allows a video game to operate.The term "system" is also commonly used.
In common use a "PC game" refers to a form of media that involves a player interacting with an IBM PC compatible personal computer connected to a video monitor. A "console game" is played on a specialized electronic device that connects to a common television set or composite video monitor. A "handheld" gaming device is a self-contained electronic device that is portable and can be held in a user's hands. These distinctions are not always clear and there may be games that bridge one or more platforms. In addition to personal computers, there are multiple other devices which have the ability to play games but are not dedicated video game machines, such as mobile phones, PDAs and graphing calculators.
This in turn has generated new terms to qualify classes of web browser based games. These games may be identified based on the website on which they appear, such as with "Facebook" games. Others are named based on the programming platform used to develop them, such as Java and Flash games.

Genres

A video game, like most other forms of media, may be categorized into genres based on many factors such as method of game play, types of goals, art style, interactivity and more. Because genres are dependent on content for definition, genres have changed and evolved as newer styles of video games have come into existence. Ever-advancing technology and production values related to video game development have fostered more lifelike and complex games which have in turn introduced or enhanced genre possibilities (e.g., virtual pets), pushed the boundaries of existing video gaming or in some cases added new possibilities in play (such as that seen with titles specifically designed for devices like Sony's EyeToy). Some genres represent combinations of others, such as massively multiplayer online role-playing games, or, more commonly, MMORPGs. It is also common to see higher level genre terms that are collective in nature across all other genres such as with action, music/rhythm or horror-themed video games.

Classifications

Casual games

Casual games derive their name from their ease of accessibility, simple to understand gameplay and quick to grasp rule sets. Additionally, casual games frequently support the ability to jump in and out of play on demand. Casual games as a format existed long before the term was coined and include video games such as Solitaire or Minesweeper which can commonly be found pre-installed with many versions of the Microsoft Windows operating system.
Examples of genres within this category are hidden object, match three, time management, Tetris or many of the tower defense style games. Casual games are generally sold through online retailers such as PopCap, Zylom, Vans Video Games and GameHouse, or provided for free play through web portals such as Newgrounds.
While casual games are most commonly played on personal computers, cellphones or PDAs, they can also be found on many of the on-line console system download services (e.g., Xbox Live, the PlayStation Network, or WiiWare).

Serious games

Serious games are games that are designed primarily to convey information or a learning experience of some sort to the player. Some serious games may even fail to qualify as a video game in the traditional sense of the term. Educational software does not typically fall under this category (e.g., touch typing tutors, language learning, etc.) and the primary distinction would appear to be based on the title's primary goal as well as target age demographics. As with the other categories, this description is more of a guideline than a rule.
Serious games are games generally made for reasons beyond simple entertainment and, as with the core and casual games, may include works from any given genre, although some such as exergames, educational games, or propaganda games may have a higher representation in this group due to their subject matter. These games are typically designed to be played by professionals as part of a specific job or for skill set improvement. They can also be created to convey social-political awareness on a specific subject.
Tactical media in video games plays a crucial role in making a statement or conveying a message on important relevant issues. This form of media allows for a broader audience to be able to receive and gain access to certain information that otherwise may not have reached such people. An example of tactical media in video games would be newsgames. These are short games related to contemporary events designed to illustrate a point. For example, Take Action Games is a game studio collective that was co-founded by Susana Ruiz and has made successful serious games. Some of these games include Darfur is Dying, Finding Zoe, and In The Balance. All of these games bring awareness to important issues and events in an intelligent and well-thought-out manner.

Educational games

On 23 September 2009, U.S. President Barack Obama launched a campaign called "Educate to Innovate" aimed at improving the technological, mathematical, scientific and engineering abilities of American students. The campaign plans to harness the power of interactive games to help students excel in these fields. It has opened up many new opportunities for the video game realm and has contributed to many new competitions, including the STEM National Video Game Competition and the Imagine Cup. Both of these events bring a focus to relevant and important current issues that can be addressed through video games to educate and spread knowledge in a new form of media. www.NobelPrize.org uses games to entice the user to learn about the Nobel Prize achievements while engaging in a fun-to-play video game. There are many different types and styles of educational games, ranging from counting and spelling games for kids to games for adults. Other games have no particular target audience in mind and are intended simply to educate or inform whoever views or plays them.

Development

In the early days of the industry, it was more common for a single person to manage all of the roles needed to create a video game. As platforms have become more complex and powerful in the type of material they can present, larger teams have been needed to generate all of the art, programming, cinematography, and more. This is not to say that the age of the "one-man shop" is gone, as this is still sometimes found in the casual gaming and handheld markets, where smaller games are prevalent due to technical limitations such as limited RAM or lack of dedicated 3D graphics rendering capabilities on the target platform (e.g., some cellphones and PDAs).
Video game development and authorship, much like any other form of entertainment, is frequently a cross-disciplinary field. Video game developers, as employees within this industry are commonly referred to, primarily include programmers and graphic designers. Over the years this has expanded to include almost every type of skill that one might see prevalent in the creation of any movie or television program, including sound designers, musicians, and other technicians, as well as skills that are specific to video games, such as the game designer. All of these are managed by producers.
With the growth of the size of development teams in the industry, the problem of cost has increased. Development studios need to be able to pay their staff a competitive wage in order to attract and retain the best talent, while publishers are constantly looking to keep costs down in order to maintain profitability on their investment. Typically, a video game console development team can range in sizes of anywhere from 5 to 50 people, with some teams exceeding 100. In May 2009, one game project was reported to have a development staff of 450. The growth of team size combined with greater pressure to get completed projects into the market to begin recouping production costs has led to a greater occurrence of missed deadlines, rushed games and the release of unfinished products.

Downloadable content

With digital video game distribution came the phenomenon of additional game content released at a later date, often for additional money, known as downloadable content (DLC). Developers can use digital distribution to issue new storylines after the main game is released, such as Rockstar Games with Grand Theft Auto IV (The Lost and Damned and The Ballad of Gay Tony), or Bethesda with Fallout 3 and its expansions. New gameplay modes can also become available, for instance, Call of Duty and its zombie modes,[36][37][38] a multiplayer mode for Mushroom Wars or a higher difficulty level for Metro: Last Light. Smaller packages of DLC are also common, ranging from better in-game weapons (Dead Space, Just Cause 2), character outfits (LittleBigPlanet, Minecraft), or new songs to perform (SingStar, Rock Band, Guitar Hero).

Modifications

Many games produced for the PC are designed such that technically oriented consumers can modify the game. These mods can add an extra dimension of replayability and interest. Developers such as id Software, Valve Software, Crytek, Bethesda, Epic Games and Blizzard Entertainment ship their games with some of the development tools used to make the game, along with documentation to assist mod developers. The Internet provides an inexpensive medium to promote and distribute mods, and they may be a factor in the commercial success of some games. This allows for the kind of success seen by popular mods such as the Half-Life mod Counter-Strike.

Cheating

Cheating in computer games may involve cheat codes and hidden spots implemented by the game developers, modification of game code by third parties, or players exploiting a software glitch. Modifications are facilitated by either cheat cartridge hardware or a software trainer. Cheats usually make the game easier by providing an unlimited amount of some resource; for example weapons, health, or ammunition; or perhaps the ability to walk through walls. Other cheats might give access to otherwise unplayable levels or provide unusual or amusing features, like altered game colors or other graphical appearances.

Glitches

Software errors not detected by software testers during development can find their way into released versions of computer and video games. This may happen because the glitch only occurs under unusual circumstances in the game, was deemed too minor to correct, or because the game development was hurried to meet a publication deadline. Glitches can range from minor graphical errors to serious bugs that can delete saved data or cause the game to malfunction. In some cases publishers will release updates (referred to as patches) to repair glitches. Sometimes a glitch may be beneficial to the player; these are often referred to as exploits.

Easter eggs

Easter eggs are hidden messages or jokes left in games by developers that are not part of the main game.

Theory

Although departments of computer science have been studying the technical aspects of video games for years, theories that examine games as an artistic medium are a relatively recent development in the humanities. The two most visible schools in this emerging field are ludology and narratology. Narrativists approach video games in the context of what Janet Murray calls "Cyberdrama". That is to say, their major concern is with video games as a storytelling medium, one that arises out of interactive fiction. Murray puts video games in the context of the Holodeck, a fictional piece of technology from Star Trek, arguing for the video game as a medium in which we get to become another person, and to act out in another world. This image of video games received early widespread popular support, and forms the basis of films such as Tron, eXistenZ and The Last Starfighter.
Ludologists break sharply and radically from this idea. They argue that a video game is first and foremost a game, which must be understood in terms of its rules, interface, and the concept of play that it deploys. Espen J. Aarseth argues that, although games certainly have plots, characters, and aspects of traditional narratives, these aspects are incidental to gameplay. For example, Aarseth is critical of the widespread attention that narrativists have given to the heroine of the game Tomb Raider, saying that "the dimensions of Lara Croft's body, already analyzed to death by film theorists, are irrelevant to me as a player, because a different-looking body would not make me play differently... When I play, I don't even see her body, but see through it and past it." Simply put, ludologists reject traditional theories of art because they claim that the artistic and socially relevant qualities of a video game are primarily determined by the underlying set of rules, demands, and expectations imposed on the player.
While many games rely on emergent principles, video games commonly present simulated story worlds where emergent behavior occurs within the context of the game. The term "emergent narrative" has been used to describe how, in a simulated environment, storyline can be created simply by "what happens to the player." However, emergent behavior is not limited to sophisticated games. In general, any place where event-driven instructions occur for AI in a game, emergent behavior will exist. For instance, take a racing game in which cars are programmed to avoid crashing, and they encounter an obstacle in the track: the cars might then maneuver to avoid the obstacle causing the cars behind them to slow and/or maneuver to accommodate the cars in front of them and the obstacle. The programmer never wrote code to specifically create a traffic jam, yet one now exists in the game.
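As a rough illustration of that racing-game example, here is a small, hypothetical Python sketch (the track, obstacle position and constants are invented): each car follows only a local rule about keeping a safe gap to whatever is ahead, yet a jam still forms behind the obstacle even though nothing in the code creates one explicitly.

# Minimal sketch of the racing-game example above: each car only follows a local
# rule ("don't get too close to whatever is ahead"), yet a traffic jam emerges
# behind an obstacle without any code that explicitly creates one.

OBSTACLE_POS = 50.0   # position of the obstacle on a 1-D track (made-up units)
SAFE_GAP = 2.0        # minimum gap each car tries to keep
SPEED = 1.0           # normal speed per tick

def step(cars):
    """Advance every car one tick, braking if the gap ahead is too small."""
    cars = sorted(cars, reverse=True)            # front-most car first
    new_positions = []
    for pos in cars:
        ahead = new_positions[-1] if new_positions else OBSTACLE_POS
        # Local avoidance rule: never close the gap below SAFE_GAP.
        new_positions.append(min(pos + SPEED, ahead - SAFE_GAP))
    return new_positions

cars = [float(p) for p in range(0, 40, 4)]       # ten cars spread along the track
for _ in range(60):
    cars = step(cars)

# After enough ticks the cars pile up at 2-unit spacing behind the obstacle:
print([round(p, 1) for p in cars])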

Social aspects

Demographics

The November 2005 Nielsen Active Gamer Study, taking a survey of 2,000 regular gamers, found that the U.S. games market is diversifying. The number of male players in the 25–40 age group has expanded significantly. For casual online puzzle-style and simple mobile cell phone games, the gender divide is more or less equal between males and females. Females have also been found to show an attraction to online multi-player games where there is a communal experience. More recently there has been a growing segment of female players engaged with the aggressive style of games historically considered to fall within traditionally male genres (e.g., first-person shooters). According to the ESRB, almost 41% of PC gamers are women.
When comparing today's industry climate with that of 20 years ago, women and many adults are more inclined to be using products in the industry. While the market for teen and young adult men is still strong, it is the other demographics that are posting significant growth. The Entertainment Software Association (ESA) provides a summary for 2011 based on a study of almost 1,200 American households carried out by Ipsos MediaCT.

Multiplayer

Video gaming has traditionally been a social experience. Multiplayer video games are those that can be played either competitively, sometimes in electronic sports, or cooperatively, by using either multiple input devices or by hotseating. Tennis for Two, arguably the first video game, was a two-player game, as was its successor Pong. The first commercially available game console, the Magnavox Odyssey, had two controller inputs.
Since then, most consoles have been shipped with two or four controller inputs. Some have had the ability to expand to four, eight or as many as 12 inputs with additional adapters, such as the Multitap. Multiplayer arcade games typically feature play for two to four players, sometimes tilting the monitor on its back for a top-down viewing experience allowing players to sit opposite one another.
Many early computer games for non-PC descendant based platforms featured multiplayer support. Personal computer systems from Atari and Commodore both regularly featured at least two game ports. PC-based computer games started with a lower availability of multiplayer options because of technical limitations. PCs typically had either one or no game ports at all. Network games for these early personal computers were generally limited to text-based adventures or MUDs that were played remotely on a dedicated server. This was due both to the slow speed of modems (300–1200 bit/s) and the prohibitive cost involved in putting a computer online in such a way that multiple visitors could make use of it. However, with the advent of widespread local area networking technologies and Internet-based online capabilities, the number of players in modern games can be 32 or higher, sometimes featuring integrated text and/or voice chat. MMOs can offer extremely high numbers of simultaneous players; Eve Online set a record with 65,303 players on a single server in 2013.


Networking

Properties

Computer networking may be considered a branch of electrical engineering, telecommunications, computer science, information technology or computer engineering, since it relies upon the theoretical and practical application of the related disciplines.
A computer network facilitates interpersonal communications, allowing people to communicate efficiently and easily via email, instant messaging, chat rooms, telephone, video telephone calls, and video conferencing. Providing access to information on shared storage devices is an important feature of many networks. A network allows sharing of files, data, and other types of information, giving authorized users the ability to access information stored on other computers on the network. A network allows sharing of network and computing resources: users may access and use resources provided by devices on the network, such as printing a document on a shared network printer. Distributed computing uses computing resources across a network to accomplish tasks. A computer network may also be used by crackers to deploy computer viruses or computer worms on devices connected to the network, or to prevent these devices from accessing the network (denial of service). A complex computer network may be difficult to set up, and it may be costly to set up an effective computer network in a large organization.

Network packet

Computer communication links that do not support packets, such as traditional point-to-point telecommunication links, simply transmit data as a bit stream. However, most information in computer networks is carried in packets. A network packet is a formatted unit of data (a list of bits or bytes, usually a few tens of bytes to a few kilobytes long) carried by a packet-switched network.
In packet networks, the data is formatted into packets that are sent through the network to their destination. Once the packets arrive, they are reassembled into their original message. With packets, the bandwidth of the transmission medium can be better shared among users than if the network were circuit switched. When one user is not sending packets, the link can be filled with packets from other users, and so the cost can be shared, with relatively little interference, provided the link isn't overused.
Packets consist of two kinds of data: control information and user data (also known as payload). The control information provides data the network needs to deliver the user data, for example: source and destination network addresses, error detection codes, and sequencing information. Typically, control information is found in packet headers and trailers, with payload data in between.
Often the route a packet needs to take through a network is not immediately available. In that case the packet is queued and waits until a link is free.
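To make the header/payload/trailer layout concrete, here is a minimal Python sketch of a toy, made-up packet format (it does not correspond to any real protocol): a fixed header carries the control information, the payload sits in the middle, and an error-detection code forms the trailer.

import struct
import zlib

# Toy packet format for illustration only: header = source address, destination
# address, sequence number; trailer = CRC32 error-detection code over the rest.
HEADER_FMT = "!4s4sH"          # 4-byte source, 4-byte destination, 2-byte sequence
TRAILER_FMT = "!I"             # 4-byte CRC32

def build_packet(src: bytes, dst: bytes, seq: int, payload: bytes) -> bytes:
    header = struct.pack(HEADER_FMT, src, dst, seq)
    trailer = struct.pack(TRAILER_FMT, zlib.crc32(header + payload))
    return header + payload + trailer

def parse_packet(packet: bytes):
    header_len = struct.calcsize(HEADER_FMT)
    trailer_len = struct.calcsize(TRAILER_FMT)
    header = packet[:header_len]
    payload = packet[header_len:-trailer_len]
    trailer = packet[-trailer_len:]
    src, dst, seq = struct.unpack(HEADER_FMT, header)
    (crc,) = struct.unpack(TRAILER_FMT, trailer)
    assert crc == zlib.crc32(header + payload), "corrupted packet"
    return src, dst, seq, payload

pkt = build_packet(b"\x0a\x00\x00\x01", b"\x0a\x00\x00\x02", 7, b"hello, network")
print(parse_packet(pkt))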

Network topology


The physical layout of a network is usually less important than the topology that connects network nodes. Most diagrams that describe a physical network are therefore topological, rather than geographic. The symbols on these diagrams usually denote network links and network nodes.

Network links

The transmission media (often referred to in the literature as the physical media) used to link devices to form a computer network include electrical cable (Ethernet, HomePNA, power line communication, G.hn), optical fiber (fiber-optic communication), and radio waves (wireless networking). In the OSI model, these are defined at layers 1 and 2, the physical layer and the data link layer.
A widely adopted family of transmission media used in local area network (LAN) technology is collectively known as Ethernet. The media and protocol standards that enable communication between networked devices over Ethernet are defined by IEEE 802.3. Ethernet transmits data over both copper and fiber cables. Wireless LAN standards (e.g., those defined by IEEE 802.11) use radio waves; others use infrared signals as a transmission medium. Power line communication uses a building's power cabling to transmit data.

Wired technologies


The following wired technologies are ordered, roughly, from slowest to fastest transmission speed.
  • Twisted pair wire is the most widely used medium for all telecommunication. Twisted-pair cabling consists of copper wires that are twisted into pairs. Ordinary telephone wires consist of two insulated copper wires twisted into pairs. Computer network cabling (wired Ethernet as defined by IEEE 802.3) consists of 4 pairs of copper cabling that can be utilized for both voice and data transmission. The use of two wires twisted together helps to reduce crosstalk and electromagnetic induction. The transmission speed ranges from 2 million bits per second to 10 billion bits per second. Twisted pair cabling comes in two forms: unshielded twisted pair (UTP) and shielded twisted-pair (STP). Each form comes in several category ratings, designed for use in various scenarios.
  • Coaxial cable is widely used for cable television systems, office buildings, and other work-sites for local area networks. The cables consist of copper or aluminum wire surrounded by an insulating layer (typically a flexible material with a high dielectric constant), which itself is surrounded by a conductive layer. The insulation helps minimize interference and distortion. Transmission speed ranges from 200 million bits per second to more than 500 million bits per second.
  • An optical fiber is a glass fiber. It carries pulses of light that represent data. Some advantages of optical fibers over metal wires are very low transmission loss and immunity from electrical interference. Optical fibers can simultaneously carry multiple wavelengths of light, which greatly increases the rate that data can be sent, and helps enable data rates of up to trillions of bits per second. Optic fibers can be used for long runs of cable carrying very high data rates, and are used for undersea cables to interconnect continents.
Price is a main factor distinguishing wired- and wireless-technology options in a business. Wireless options command a price premium that can make purchasing wired computers, printers and other devices a financial benefit. Before making the decision to purchase hard-wired technology products, a review of the restrictions and limitations of the selections is necessary. Business and employee needs may override any cost considerations.

Wireless technologies


  • Terrestrial microwave – Terrestrial microwave communication uses Earth-based transmitters and receivers resembling satellite dishes. Terrestrial microwaves are in the low-gigahertz range, which limits all communications to line-of-sight. Relay stations are spaced approximately 48 km (30 mi) apart.
  • Communications satellites – Satellites communicate via microwave radio waves, which are not deflected by the Earth's atmosphere. The satellites are stationed in space, typically in geosynchronous orbit 35,400 km (22,000 mi) above the equator. These Earth-orbiting systems are capable of receiving and relaying voice, data, and TV signals.
  • Cellular and PCS systems use several radio communications technologies. The systems divide the region covered into multiple geographic areas. Each area has a low-power transmitter or radio relay antenna device to relay calls from one area to the next area.
  • Radio and spread spectrum technologies – Wireless local area networks use a high-frequency radio technology similar to digital cellular and a low-frequency radio technology. Wireless LANs use spread spectrum technology to enable communication between multiple devices in a limited area. IEEE 802.11 defines a common flavor of open-standards wireless radio-wave technology known as Wi-Fi.

Exotic technologies

There have been various attempts at transporting data over exotic media:
  • Extending the Internet to interplanetary dimensions via radio waves.
Such a link has a large round-trip delay time, which gives slow two-way communication, but doesn't prevent sending large amounts of information.

Network nodes

Apart from any physical transmission medium there may be, networks comprise additional basic system building blocks, such as network interface controllers (NICs), repeaters, hubs, bridges, switches, routers, modems, and firewalls.

Network interfaces


A network interface controller (NIC) is computer hardware that provides a computer with the ability to access the transmission media, and has the ability to process low-level network information. For example, the NIC may have a connector for accepting a cable, or an aerial for wireless transmission and reception, and the associated circuitry. The NIC responds to traffic addressed to a network address for either the NIC or the computer as a whole.
In Ethernet networks, each network interface controller has a unique Media Access Control (MAC) address, usually stored in the controller's permanent memory. To avoid address conflicts between network devices, the Institute of Electrical and Electronics Engineers (IEEE) maintains and administers MAC address uniqueness. The size of an Ethernet MAC address is six octets. The three most significant octets are reserved to identify NIC manufacturers. These manufacturers, using only their assigned prefixes, uniquely assign the three least-significant octets of every Ethernet interface they produce.
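A short Python sketch of that octet split, using a made-up example address: the first three octets are the manufacturer prefix (OUI) and the last three identify the individual interface.

# Splitting an Ethernet MAC address into the manufacturer prefix (OUI, the three
# most significant octets) and the device-specific part (the three least
# significant octets). The address below is a made-up example.
def split_mac(mac: str):
    octets = bytes(int(part, 16) for part in mac.split(":"))
    assert len(octets) == 6, "an Ethernet MAC address is six octets"
    oui, device = octets[:3], octets[3:]
    return oui.hex(":"), device.hex(":")

print(split_mac("00:1a:2b:3c:4d:5e"))   # ('00:1a:2b', '3c:4d:5e')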

Repeaters and hubs

A repeater is an electronic device that receives a network signal, cleans it of unnecessary noise, and regenerates it. The signal is retransmitted at a higher power level, or to the other side of an obstruction, so that the signal can cover longer distances without degradation. In most twisted pair Ethernet configurations, repeaters are required for cable that runs longer than 100 meters. With fiber optics, repeaters can be tens or even hundreds of kilometers apart.
A repeater with multiple ports is known as a hub. Repeaters work on the physical layer of the OSI model. Repeaters require a small amount of time to regenerate the signal. This can cause a propagation delay that affects network performance. As a result, many network architectures limit the number of repeaters that can be used in a row, e.g., the Ethernet 5-4-3 rule.
Hubs have been mostly obsoleted by modern switches; but repeaters are used for long distance links, notably undersea cabling.

Bridges

A network bridge connects and filters traffic between two network segments at the data link layer (layer 2) of the OSI model to form a single network. This breaks the network's collision domain but maintains a unified broadcast domain. Network segmentation breaks down a large, congested network into an aggregation of smaller, more efficient networks.
Bridges come in three basic types:
  • Local bridges: Directly connect LANs
  • Remote bridges: Can be used to create a wide area network (WAN) link between LANs. Remote bridges, where the connecting link is slower than the end networks, largely have been replaced with routers.
  • Wireless bridges: Can be used to join LANs or connect remote devices to LANs.

Switches

A network switch is a device that forwards and filters OSI layer 2 datagrams between ports based on the MAC addresses in the packets. A switch is distinct from a hub in that it only forwards the frames to the physical ports involved in the communication rather than all ports connected. It can be thought of as a multi-port bridge. It learns to associate physical ports to MAC addresses by examining the source addresses of received frames. If an unknown destination is targeted, the switch broadcasts to all ports but the source. Switches normally have numerous ports, facilitating a star topology for devices, and cascading additional switches.
Multi-layer switches are capable of routing based on layer 3 addressing or additional logical levels. The term switch is often used loosely to include devices such as routers and bridges, as well as devices that may distribute traffic based on load or based on application content (e.g., a Web URL identifier).
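The MAC-learning behaviour described above can be sketched in a few lines of Python. This is only an illustration with invented port numbers and shortened addresses, not a model of real switch hardware.

# Sketch of a learning switch: remember which port each source address was seen
# on, forward known destinations out of a single port, and flood unknown
# destinations to every port except the one the frame arrived on.
class LearningSwitch:
    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.mac_table = {}                      # MAC address -> port

    def receive(self, in_port: int, src: str, dst: str):
        self.mac_table[src] = in_port            # learn from the source address
        if dst in self.mac_table:
            return [self.mac_table[dst]]         # forward out one known port
        # Unknown destination: flood to all ports except the ingress port.
        return [p for p in range(self.num_ports) if p != in_port]

sw = LearningSwitch(num_ports=4)
print(sw.receive(0, src="aa:aa", dst="bb:bb"))   # flooded: [1, 2, 3]
print(sw.receive(1, src="bb:bb", dst="aa:aa"))   # learned: [0]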

Router

A router is an internetworking device that forwards packets between networks by processing the routing information included in the packet or datagram (Internet protocol information from layer 3). The routing information is often processed in conjunction with the routing table (or forwarding table). A router uses its routing table to determine where to forward packets. (A destination in a routing table can include a "null" interface, also known as the "black hole" interface because data can go into it, however, no further processing is done for said data.)
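As an illustration of a routing-table lookup, here is a small Python sketch using the standard ipaddress module. The prefixes, next hops and the "null" (black hole) entry are hypothetical, and the usual longest-prefix-match rule is assumed.

import ipaddress

# Hypothetical routing table: each entry maps a destination prefix to a next hop
# (or to the "null" black-hole interface, where traffic is simply discarded).
ROUTING_TABLE = [
    (ipaddress.ip_network("10.0.0.0/8"),     "next-hop 192.0.2.1"),
    (ipaddress.ip_network("10.1.0.0/16"),    "next-hop 192.0.2.2"),
    (ipaddress.ip_network("192.168.0.0/16"), "null"),
]

def route(destination: str) -> str:
    dst = ipaddress.ip_address(destination)
    matches = [(net, hop) for net, hop in ROUTING_TABLE if dst in net]
    if not matches:
        return "no route"
    # Longest prefix wins when several entries match.
    _, hop = max(matches, key=lambda item: item[0].prefixlen)
    return hop

print(route("10.1.2.3"))      # next-hop 192.0.2.2 (the /16 beats the /8)
print(route("192.168.5.5"))   # null (discarded, no further processing)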

Modems

Modems (MOdulator-DEModulator) are used to connect network nodes via wire not originally designed for digital network traffic, or for wireless. To do this, one or more carrier frequencies are modulated by the digital signal to produce an analog signal that can be tailored to give the required properties for transmission. Modems are commonly used for telephone lines, using Digital Subscriber Line technology.
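A very rough Python sketch of the modulation idea: each bit of the digital signal selects one of two carrier frequencies (simple frequency-shift keying). The sample rate, bit duration and frequencies are made up, and real telephone-line and DSL modems use far more elaborate schemes.

import math

# Frequency-shift keying sketch: a 0 bit is sent as a tone at FREQ_0 and a 1 bit
# as a tone at FREQ_1, turning a digital bit stream into an analog waveform.
SAMPLE_RATE = 8000            # samples per second (arbitrary for the sketch)
BIT_DURATION = 0.01           # seconds per bit
FREQ_0, FREQ_1 = 1200, 2200   # made-up tone frequencies in hertz

def modulate(bits):
    samples = []
    for bit in bits:
        freq = FREQ_1 if bit else FREQ_0
        for n in range(int(SAMPLE_RATE * BIT_DURATION)):
            samples.append(math.sin(2 * math.pi * freq * n / SAMPLE_RATE))
    return samples

signal = modulate([1, 0, 1, 1, 0])
print(len(signal), "analog samples for 5 bits")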

Firewalls

A firewall is a network device for controlling network security and access rules. Firewalls are typically configured to reject access requests from unrecognized sources while allowing actions from recognized ones. The vital role firewalls play in network security grows in parallel with the constant increase in cyber attacks.
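A minimal Python sketch of that rule-based behaviour, with invented addresses and ports: requests from recognized sources are allowed and everything else is rejected.

# Toy packet filter: allow traffic only from a small set of recognized sources
# and reject a blocked port. The addresses and port are examples only.
ALLOWED_SOURCES = {"192.0.2.10", "192.0.2.11"}   # hypothetical trusted hosts
BLOCKED_PORTS = {23}                              # e.g. block telnet

def filter_packet(src_ip: str, dst_port: int) -> str:
    if src_ip not in ALLOWED_SOURCES:
        return "REJECT (unrecognized source)"
    if dst_port in BLOCKED_PORTS:
        return "REJECT (blocked port)"
    return "ALLOW"

print(filter_packet("192.0.2.10", 443))   # ALLOW
print(filter_packet("203.0.113.7", 443))  # REJECT (unrecognized source)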

Network structure

Network topology is the layout or organizational hierarchy of interconnected nodes of a computer network. Different network topologies can affect throughput, but reliability is often more critical. With many technologies, such as bus networks, a single failure can cause the network to fail entirely. In general the more interconnections there are, the more robust the network is; but the more expensive it is to install.

Common layouts

Common layouts are listed below; a short sketch in code follows the list.
  • bus network: all nodes are connected to a common medium. This was the layout used in the original Ethernet, called 10BASE5 and 10BASE2.
  • star network: all nodes are connected to a special central node. This is the typical layout found in a Wireless LAN, where each wireless client connects to the central Wireless access point.
  • ring network: each node is connected to its left and right neighbour node, such that all nodes are connected and that each node can reach each other node by traversing nodes left- or rightwards. The Fiber Distributed Data Interface (FDDI) made use of such a topology.
  • mesh network: each node is connected to an arbitrary number of neighbours in such a way that there is at least one traversal from any node to any other.
  • fully connected network: each node is connected to every other node in the network.
  • tree network: nodes are arranged hierarchically.
Note that the physical layout of the nodes in a network may not necessarily reflect the network topology. As an example, with FDDI, the network topology is a ring (actually two counter-rotating rings), but the physical topology is often a star, because all neighboring connections can be routed via a central physical location.
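Here is the promised sketch: some of the layouts above expressed as Python adjacency lists, which makes the difference in link count between a star, a ring and a fully connected network easy to see. Node labels are arbitrary.

# Star, ring and fully connected layouts built as adjacency lists, then the
# number of links in each is counted (every link appears twice, once per end).
def star(n):            # node 0 is the central hub / access point
    return {0: list(range(1, n)), **{i: [0] for i in range(1, n)}}

def ring(n):            # each node linked to its left and right neighbour
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

def fully_connected(n): # every node linked to every other node
    return {i: [j for j in range(n) if j != i] for i in range(n)}

for name, topo in [("star", star(5)), ("ring", ring(5)), ("full", fully_connected(5))]:
    links = sum(len(neighbours) for neighbours in topo.values()) // 2
    print(name, "->", links, "links")   # star: 4, ring: 5, full: 10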

Overlay network


An overlay network is a virtual computer network that is built on top of another network. Nodes in the overlay network are connected by virtual or logical links. Each link corresponds to a path, perhaps through many physical links, in the underlying network. The topology of the overlay network may (and often does) differ from that of the underlying one. For example, many peer-to-peer networks are overlay networks. They are organized as nodes of a virtual system of links that run on top of the Internet. Overlay networks have been around since the invention of networking, when computer systems were connected over telephone lines using modems, before any data network existed.
The most striking example of an overlay network is the Internet itself. The Internet itself was initially built as an overlay on the telephone network. Even today, at the network layer, each node can reach any other by a direct connection to the desired IP address, thereby creating a fully connected network. The underlying network, however, is composed of a mesh-like interconnect of sub-networks of varying topologies (and technologies). Address resolution and routing are the means that allow mapping of a fully connected IP overlay network to its underlying network.
Another example of an overlay network is a distributed hash table, which maps keys to nodes in the network. In this case, the underlying network is an IP network, and the overlay network is a table (actually a map) indexed by keys.
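A toy Python sketch of that idea, with hypothetical node names: keys are hashed and mapped onto overlay nodes by a simple modulo rule (real distributed hash tables use more careful schemes such as consistent hashing).

import hashlib

# Map keys to overlay nodes: hash the key, then pick a node by modulo. The node
# names are invented; where the nodes sit in the underlying IP network is
# irrelevant to this key-to-node mapping.
NODES = ["node-a.example", "node-b.example", "node-c.example"]

def node_for_key(key: str) -> str:
    digest = hashlib.sha1(key.encode()).digest()
    index = int.from_bytes(digest, "big") % len(NODES)   # simple modulo placement
    return NODES[index]

for key in ["alice", "bob", "song.mp3"]:
    print(key, "->", node_for_key(key))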
Overlay networks have also been proposed as a way to improve Internet routing, such as through quality of service guarantees to achieve higher-quality streaming media. Previous proposals such as IntServ, DiffServ, and IP Multicast have not seen wide acceptance, largely because they require modification of all routers in the network. On the other hand, an overlay network can be incrementally deployed on end-hosts running the overlay protocol software, without cooperation from Internet service providers. The overlay network has no control over how packets are routed in the underlying network between two overlay nodes, but it can control, for example, the sequence of overlay nodes that a message traverses before it reaches its destination.
For example, Akamai Technologies manages an overlay network that provides reliable, efficient content delivery (a kind of multicast). Academic research includes end system multicast, resilient routing and quality of service studies, among others.

Communications protocols


A communications protocol is a set of rules for exchanging information over network links. In a protocol stack (also see the OSI model), each protocol leverages the services of the protocol below it. An important example of a protocol stack is HTTP (the World Wide Web protocol) running over TCP over IP (the Internet protocols) over IEEE 802.11 (the Wi-Fi protocol). This stack is used between the wireless router and the home user's personal computer when the user is surfing the web.
Communication protocols have various characteristics. They may be connection-oriented or connectionless, they may use circuit mode or packet switching, and they may use hierarchical addressing or flat addressing.
Whilst the use of protocol layering is today ubiquitous across the field of computer networking, it has been historically criticized by many researchers[13] for two principal reasons. Firstly, abstracting the protocol stack in this way may cause a higher layer to duplicate functionality of a lower layer, a prime example being error recovery on both a per-link basis and an end-to-end basis.[14] Secondly, it is common that a protocol implementation at one layer may require data, state or addressing information that is only present at another layer, thus defeating the point of separating the layers in the first place. For example, TCP uses the ECN field in the IPv4 header as an indication of congestion; IP is a network layer protocol whereas TCP is a transport layer protocol.
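From the application's point of view, that HTTP-over-TCP-over-IP stack looks roughly like the following Python sketch: the program writes an HTTP request, the operating system wraps it in TCP segments and IP packets, and the link layer (Ethernet or Wi-Fi) carries it. example.com is used purely as a placeholder host.

import socket

# Open a TCP connection (the OS handles TCP, IP and the link layer underneath),
# send a plain HTTP request, and read the first chunk of the reply.
request = (b"GET / HTTP/1.1\r\n"
           b"Host: example.com\r\n"
           b"Connection: close\r\n\r\n")

with socket.create_connection(("example.com", 80), timeout=10) as conn:
    conn.sendall(request)              # HTTP, carried over TCP, over IP
    reply = conn.recv(1024)

print(reply.splitlines()[0])           # e.g. b'HTTP/1.1 200 OK'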
There are many communication protocols, a few of which are described below.

IEEE 802

The complete IEEE 802 protocol suite provides a diverse set of networking capabilities. The protocols have a flat addressing scheme. They operate mostly at levels 1 and 2 of the OSI model.
For example, MAC bridging (IEEE 802.1D) deals with the routing of Ethernet packets using a Spanning Tree Protocol. IEEE 802.1Q describes VLANs, and IEEE 802.1X defines a port-based Network Access Control protocol, which forms the basis for the authentication mechanisms used in VLANs (but it is also found in WLANs) – it is what the home user sees when the user has to enter a "wireless access key".
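A rough sketch of what the Spanning Tree Protocol accomplishes, in Python: given bridges joined by redundant links, keep only the links of a loop-free tree rooted at the bridge with the lowest ID. The topology is invented, and real STP reaches this result by exchanging BPDU messages rather than running a global search.

from collections import deque

# Four bridges with a redundant (looped) set of links between them.
LINKS = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {2, 3}}

root = min(LINKS)                      # lowest bridge ID becomes the root
active, visited, queue = set(), {root}, deque([root])
while queue:
    bridge = queue.popleft()
    for neighbour in sorted(LINKS[bridge]):
        if neighbour not in visited:
            visited.add(neighbour)
            active.add((bridge, neighbour))   # keep this link in the tree
            queue.append(neighbour)

print(active)   # three links remain; the redundant ones are left blocked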

Ethernet

Ethernet, sometimes simply called LAN, is a family of protocols used in wired LANs, described by a set of standards together called IEEE 802.3, published by the Institute of Electrical and Electronics Engineers.

Wireless LAN

Wireless LAN, also widely known as WLAN or Wi-Fi, is probably the most well-known member of the IEEE 802 protocol family for home users today. It is standardized by IEEE 802.11 and shares many properties with wired Ethernet.

Internet Protocol Suite

The Internet Protocol Suite, also called TCP/IP, is the foundation of all modern networking. It offers connectionless as well as connection-oriented services over an inherently unreliable network traversed by datagram transmission at the Internet Protocol (IP) level. At its core, the protocol suite defines the addressing, identification, and routing specifications for Internet Protocol Version 4 (IPv4) and for IPv6, the next generation of the protocol with a much enlarged addressing capability.
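A small illustration of the enlarged IPv6 address space using Python's standard ipaddress module; the addresses are documentation examples, not real hosts.

import ipaddress

# Compare the address sizes of IPv4 and IPv6 using documentation-range examples.
v4 = ipaddress.ip_address("192.0.2.1")
v6 = ipaddress.ip_address("2001:db8::1")

print(v4.version, v4.max_prefixlen)   # 4, 32  -> 2**32 possible addresses
print(v6.version, v6.max_prefixlen)   # 6, 128 -> 2**128 possible addresses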

SONET/SDH

Synchronous optical networking (SONET) and Synchronous Digital Hierarchy (SDH) are standardized multiplexing protocols that transfer multiple digital bit streams over optical fiber using lasers. They were originally designed to transport circuit mode communications from a variety of different sources, primarily to support real-time, uncompressed, circuit-switched voice encoded in PCM (Pulse-Code Modulation) format. However, due to their protocol neutrality and transport-oriented features, SONET/SDH were also the obvious choice for transporting Asynchronous Transfer Mode (ATM) frames.