Friday, June 12, 2009

Technical Foundation

The complex communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. While the hardware can often be used to support other software systems, it is the design and the rigorous standardization process of the software architecture that characterizes the Internet and provides the foundation for its scalability and success.

The responsibility for the architectural design of the Internet software systems has been delegated to the Internet Engineering Task Force (IETF). The IETF conducts standard-setting work groups, open to any individual, about the various aspects of Internet architecture.

Resulting discussions and final standards are published in a series of publications each of which is called a Request for Comment (RFC), freely available on the IETF web site. The principal methods of networking that enable the Internet are contained in specially designated RFCs that constitute the Internet Standards.

These standards describe a framework known as the Internet Protocol Suite, a model architecture that divides networking methods into a layered system of protocols (RFC 1122, RFC 1123). The layers correspond to the environment or scope in which their services operate. At the top is the Application Layer, the space for the application-specific networking methods used in software applications such as a web browser. Just below it, the Transport Layer connects applications on different hosts via the network (e.g., in the client-server model) with appropriate data-exchange methods.

Underlying these are two layers covering the actual networking technologies. The Internet Layer enables computers to identify and locate each other via Internet Protocol (IP) addresses, and allows them to connect to one another via intermediate (transit) networks. At the bottom of the architecture is the Link Layer, a software layer that provides connectivity between hosts on the same local network link (hence the name), such as a local area network (LAN) or a dial-up connection.

The model, also known as TCP/IP, is designed to be independent of the underlying hardware, which it therefore does not describe in any detail. Other models have been developed, such as the Open Systems Interconnection (OSI) model; the two are not compatible in their details of description or implementation, but many similarities exist, and the TCP/IP protocols are usually included in discussions of OSI networking.
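The layering described above can be pictured as each layer wrapping the data handed down from the layer above with its own header before passing it further down. The following is a toy sketch of that encapsulation idea in Python; the header strings and the addresses (drawn from documentation ranges) are illustrative inventions, not real packet formats.

```python
# A toy illustration of TCP/IP-style encapsulation: each layer prepends
# its own (fake) header to the payload it receives from the layer above.
# These are NOT real wire formats -- just the nesting structure.

def encapsulate(payload: bytes) -> bytes:
    app = b"HTTP|" + payload                       # Application layer: protocol-specific message
    transport = b"TCP|dstport=80|" + app           # Transport layer: adds port numbers, sequencing
    internet = b"IP|dst=203.0.113.5|" + transport  # Internet layer: adds IP addresses for routing
    link = b"ETH|frame|" + internet                # Link layer: frames the datagram for the local link
    return link

frame = encapsulate(b"GET / HTTP/1.1")
print(frame)
```

Reading the resulting byte string from left to right mirrors how a receiving host peels the layers off in reverse order, each layer consuming only its own header.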

The most prominent component of the Internet model is the Internet Protocol (IP), which provides addressing systems (IP addresses) for computers on the Internet. IP enables internetworking and essentially establishes the Internet itself. IP Version 4 (IPv4) is the initial version used on the first generation of today's Internet and is still in dominant use. It was designed to address up to ~4.3 billion (10⁹) Internet hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion, which is estimated to enter its final stage in approximately 2011. A new protocol version, IPv6, was developed which provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 is currently in the commercial deployment phase around the world, and Internet address registries (RIRs) have begun to urge all resource managers to plan rapid adoption and conversion.
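The difference in addressing capacity between the two versions can be made concrete with Python's standard `ipaddress` module; the specific addresses below are taken from the documentation ranges reserved for examples.

```python
import ipaddress

# IPv4 uses 32-bit addresses: roughly 4.3 billion in total.
ipv4_total = 2 ** 32
# IPv6 uses 128-bit addresses: about 3.4 x 10^38 in total.
ipv6_total = 2 ** 128

v4 = ipaddress.ip_address("203.0.113.5")  # example address from a documentation range
v6 = ipaddress.ip_address("2001:db8::1")  # example address from a documentation range

print(v4.version, v6.version)             # the module detects the version automatically
print(ipv6_total // ipv4_total)           # entire IPv4 address spaces that fit inside IPv6
```

The last line shows that the IPv6 space contains 2⁹⁶ complete copies of the IPv4 space, which is why exhaustion is not a practical concern for IPv6.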

IPv6 is not interoperable with IPv4. It essentially establishes a "parallel" version of the Internet not directly accessible with IPv4 software. This means software upgrades or translator facilities are necessary for every networking device that needs to communicate on the IPv6 Internet. Most modern computer operating systems have already been updated to operate with both versions of the Internet Protocol. Network infrastructures, however, are still lagging in this development.
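One small example of the kind of translation machinery involved is the "IPv4-mapped" IPv6 address form (`::ffff:a.b.c.d`, defined in RFC 4291), which lets dual-stack software represent an IPv4 endpoint inside the IPv6 address space. Python's standard `ipaddress` module can extract the embedded IPv4 address; the addresses below are documentation-range examples.

```python
import ipaddress

# An IPv4-mapped IPv6 address embeds a 32-bit IPv4 address in the
# low-order bits of a 128-bit IPv6 address (::ffff:a.b.c.d form).
mapped = ipaddress.IPv6Address("::ffff:192.0.2.1")
print(mapped.ipv4_mapped)   # the embedded IPv4 address

# An ordinary IPv6 address carries no embedded IPv4 counterpart.
plain = ipaddress.IPv6Address("2001:db8::1")
print(plain.ipv4_mapped)    # None
```

Mapped addresses are a representation convenience for dual-stack hosts, not a bridge: a pure-IPv6 host still cannot reach an IPv4-only host without an actual translator in the path.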
Aside from the complex physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts (e.g., peering agreements), and by technical specifications or protocols that describe how to exchange data over the network. Indeed, the Internet is defined by its interconnections and routing policies.

Internet Structure

The Internet and its structure have been studied extensively. For example, it has been determined that both the Internet IP routing structure and hypertext links of the World Wide Web are examples of scale-free networks.
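Scale-free networks are characterized by a heavy-tailed degree distribution: a few highly connected hubs coexist with many sparsely connected nodes. A standard toy model that produces this behavior is preferential attachment (in the style of Barabási and Albert), sketched below using only the Python standard library; the network grown here is synthetic and purely illustrative, not Internet measurement data.

```python
import random
from collections import Counter

random.seed(42)  # fixed seed so the sketch is reproducible

def grow(n_nodes: int) -> Counter:
    """Grow a graph by preferential attachment: each new node links to an
    existing node chosen with probability proportional to its degree."""
    degree = Counter({0: 1, 1: 1})   # start with a single edge between nodes 0 and 1
    targets = [0, 1]                 # node i appears in this list degree[i] times
    for new in range(2, n_nodes):
        old = random.choice(targets) # degree-proportional choice of attachment point
        degree[new] += 1
        degree[old] += 1
        targets.extend([new, old])
    return degree

deg = grow(2000)
degrees = sorted(deg.values(), reverse=True)
print(degrees[:5])   # the hubs: far better connected than the typical node
print(degrees[-5:])  # the long tail of degree-1 leaf nodes
```

The "rich get richer" choice on each step is what generates the hubs; replacing `random.choice(targets)` with a uniform choice over nodes would instead yield a narrow, non-scale-free degree distribution.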


Similar to the way commercial Internet providers connect via Internet exchange points, research networks tend to interconnect into large subnetworks such as GEANT, GLORIAD, Internet2 (successor of the Abilene Network), and the UK's national research and education network JANET. These in turn are built around smaller networks (see also the list of academic computer network organizations).


Many computer scientists describe the Internet as a "prime example of a large-scale, highly engineered, yet highly complex system". The Internet is extremely heterogeneous; for instance, data transfer rates and physical characteristics of connections vary widely. The Internet exhibits "emergent phenomena" that depend on its large-scale organization. For example, data transfer rates exhibit temporal self-similarity. Further adding to the complexity of the Internet is the ability of more than one computer to use the Internet through only one node, creating the possibility of deep, hierarchical sub-networks that can in theory be extended indefinitely (disregarding the programmatic limitations of the IPv4 protocol). The principles of this architecture date back to the 1960s, and it may not be best suited to modern needs; the possibility of developing alternative structures is therefore being investigated.
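The usual mechanism for many computers sharing a single Internet node is for the machines behind the shared node to use private (RFC 1918) addresses, which are reusable in every such sub-network precisely because they are not routable on the public Internet. Python's `ipaddress` module can distinguish the two kinds of address, as this small sketch shows (the public address used is the well-known 8.8.8.8 resolver, chosen only as a recognizable example).

```python
import ipaddress

# A host behind a shared gateway typically has an RFC 1918 private address,
# valid only inside its own sub-network.
lan_host = ipaddress.ip_address("192.168.1.10")
# A globally routable address, by contrast, must be unique Internet-wide.
public_host = ipaddress.ip_address("8.8.8.8")

print(lan_host.is_private)     # private: many sub-networks may reuse this address
print(public_host.is_private)  # public: reachable from anywhere on the Internet
```

Because every sub-network can reuse the same private ranges behind its own gateway, this nesting can be repeated at each level, which is the hierarchy the paragraph above describes.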


According to a June 2007 article in Discover magazine, the combined weight of all the electrons moved within the Internet in a day is 0.2 millionths of an ounce. Others have estimated this at nearer 2 ounces (50 grams).

Computer network diagrams often represent the Internet using a cloud symbol from which network communications pass in and out.

Friday, May 8, 2009

Hackers in Business Now

The USA has now decided to give jobs to hackers in order to stay safe from hackers. Hackers have definitely made them think seriously: almost every time hackers have tried to do something, they have succeeded, and only a few hackers in the world have ever been caught. The USA thus becomes the first country to offer jobs to hackers.