Saturday, November 22, 2008

The Blast Of Atomic Automobiles

Ford has introduced several remarkable innovations to the automotive industry. One of these is the Ford Nucleon, a nuclear-powered concept car unveiled by the Ford Motor Company in 1958. The car was to be powered by a small nuclear reactor housed in the trunk. A scale model can be viewed at the Henry Ford Museum in Dearborn, Michigan.

The Nucleon was one of Ford's most ambitious projects. It was hailed as the automobile of the future because of its outstanding qualities and unique design features: a silent ride, a sleek futuristic look, zero harmful emissions, and incredible fuel mileage. Most unusual of all, it carried a pint-sized atomic fission reactor in its trunk.

The Nucleon's power capsule sat at the rear, suspended between twin booms. The capsule held the radioactive core, whose power output was regulated according to the car's performance needs and the distance traveled.

In the passenger compartment, the Nucleon incorporated luxurious features, including a one-piece windshield and a compound rear window topped by a cantilevered roof. The drive train was integral to the power module, and it was reported that a car like the Nucleon would be capable of traveling 5,000 miles or more without the core needing recharging.

The car never progressed beyond the concept stage. One obstacle was the need to develop charging stations designed to supplant petroleum fuel stations. The more influential reason, however, was that the general public had become more aware of the perils of atomic energy and concerned about nuclear waste.

Still, the question lingers: will the Ford Nucleon ever make a blast in the automotive industry, or will it vanish into the vastness of oblivion? That is a question Ford has yet to answer.

The Invention Of The Atomic Clocks

Louis Essen was born in 1908 in Nottingham, England. His childhood was typical of the time, and he pursued his education with enjoyment and dedication. At the age of 20 Louis graduated from the University of Nottingham, and it was at this time that his career started to take off, as he was invited to join the NPL, the National Physical Laboratory.

It was during Louis's time at the NPL that he began working to develop a quartz crystal oscillator, believing it capable of measuring time as accurately as a pendulum-based clock. Ten years after joining the NPL, Louis had invented the Essen ring. This eponymous invention, a ring-shaped quartz resonator bearing his name, made his latest clock three times more accurate than the previous versions.

Louis soon moved on to newer areas of research and began to study ways to measure the speed of light. During World War II he began to work on high frequency radar and used his technical ability to develop the cavity resonance wavemeter. From 1946 it was this wavemeter which he used, along with a colleague by the name of Albert Gordon-Smith, to make his lightspeed measurements. It has been acknowledged recently that Louis’s measurements were by far the most accurate to have been recorded up until that time.

During the early 1950s Louis began to take an interest in research being carried out at the National Bureau of Standards (NBS) in the United States. He learnt that work was under way to invent a clock more accurate than any other. The American scientists were pursuing the idea of maintaining a clock's accuracy by using the radiation emitted or absorbed by atoms. At that time the Americans were using the ammonia molecule, but Louis felt this would not work as well as using different atoms, such as hydrogen or caesium, and so he began working on his own clock using these materials instead.

In 1953 Louis and a colleague, Jack Parry, received permission to develop an atomic clock at the NPL, based on Louis's existing knowledge of quartz crystal oscillators and other relevant techniques he had learned from the cavity resonance wavemeter he had previously designed. Only two years later Louis's first atomic clock, Caesium I, designed by the UK scientists, was running; development in the United States had all but stopped due to political difficulties.

Louis continued to work on his atomic clock, and by 1964 he had managed to increase its accuracy from one second in 300 years to one second in 2,000 years! The continued success of Louis's work resulted in the definition of a second being changed from 1/86,400 of a mean solar day to the time taken for 9,192,631,770 cycles of the radiation in a caesium atomic clock.
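As a quick sanity check on the numbers behind the redefinition, the following sketch (the constant and variable names are my own) works through the arithmetic of the old and new definitions of the second:

```python
# The 1967 SI redefinition fixes the caesium-133 hyperfine transition
# frequency at exactly 9,192,631,770 cycles per second.
CAESIUM_CYCLES_PER_SECOND = 9_192_631_770

# Duration of a single caesium cycle, in seconds (about a tenth of a
# nanosecond):
cycle = 1 / CAESIUM_CYCLES_PER_SECOND
print(f"{cycle:.3e} s per cycle")  # -> 1.088e-10 s per cycle

# The older astronomical definition: a mean solar day contains
# 24 h x 60 min x 60 s = 86,400 seconds, so one second was 1/86,400
# of a mean solar day.
seconds_per_day = 24 * 60 * 60
assert seconds_per_day == 86_400
```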

Louis Essen died in 1997 and before his death had been honoured with, amongst others, an OBE and the Tompion Gold Medal of the Clockmakers’ Company.

Atomic Energy Research Establishment

In 1945 John Cockcroft was asked to set up a research laboratory to further the use of nuclear fission, both for military purposes and for generating energy. The criteria for selection involved finding somewhere remote with a good water supply, yet within reach of good transport links and a university with a nuclear physics laboratory; this more or less limited the choice to Oxford or Cambridge. It had been decided that an RAF airfield would be chosen, the aircraft hangars being ideal to house the large atomic piles that would need to be built. Although Cambridge University had the better nuclear physics facility (the Cavendish Laboratory), the RAF did not want to abandon any of its eastern airfields because of the new threat of the Cold War, so Harwell was chosen when the RAF made that airfield available. RAF Harwell was some sixteen miles south of Oxford, near Didcot and the village of Harwell, and on 1 January 1946 the Atomic Energy Research Establishment was formed there, coming under the Ministry of Supply. The scientists mostly took over both accommodation and work buildings from the departing RAF.

The early laboratory had several specialist divisions: Chemistry (initially headed by Egon Bretscher, later by Robert Spence), General Physics (H.W.B. Skinner), Nuclear Physics (initially headed by Otto Frisch, later E. Bretscher), Reactor Physics (John Dunworth), Theoretical Physics (Klaus Fuchs, later Brian Flowers), Isotopes (Henry Seligmann) and Engineering (Harold Tongue, later Robert Jackson). Directors after Cockcroft included Basil Schonland, Arthur Vick and Walter Marshall.


Early reactors
Such was the interest in nuclear power and the priority devoted to it in those days that the first reactor, GLEEP, was operating by 15 August 1947. GLEEP (Graphite Low Energy Experimental Pile) was a low energy (3 kilowatt) graphite-moderated air-cooled reactor. The first reactor in Western Europe, it was remarkably long-lived, operating until 1990.

A successor to GLEEP, called BEPO (British Experimental Pile 0) was constructed based on the experience with GLEEP, and commenced operation in 1948. BEPO was shut down in 1968.

LIDO was an enriched uranium thermal swimming pool reactor which operated from 1956 to 1972 and was mainly used for shielding and nuclear physics experiments. It was fully dismantled and returned to a green field site in 1995.

A pair of larger 26 MW reactors, DIDO and PLUTO, which used enriched uranium with a heavy water moderator, came online in 1956 and 1957 respectively. These small reactors were used primarily for testing the behaviour of different materials under intense neutron irradiation, to help decide which materials to build reactor components from. A sample could be irradiated for a few months to simulate the radiation dose it would receive over the lifetime of a power reactor. They also took over commercial isotope production from BEPO after that reactor was shut down. DIDO and PLUTO themselves were shut down in 1990, and their fuel, moderator and ancillary buildings were removed. The GLEEP reactor and the hangar housing it were decommissioned in 2005. The current plans are to decommission the BEPO, DIDO and PLUTO reactors by 2020.


Zeta
One of the most significant experiments to occur at AERE was the ZETA fusion power experiment. An early attempt to build a large-scale nuclear fusion reactor, the project was started in 1954, and the first successes were achieved in 1957. In 1958 the project was shut down, as it was believed that no further progress could be made with the kind of design that ZETA represented. (see Timeline of nuclear fusion).


Organisational history
In 1954 AERE was incorporated into the newly formed United Kingdom Atomic Energy Authority (UKAEA). Harwell and other laboratories were to assume responsibility for atomic energy research and development. It was part of the Department of Trade and Industry.

During the 1980s the slowdown of the British nuclear energy programme resulted in a greatly reduced demand for the kind of work being done by the UKAEA, and pressures on government spending also reduced the funding available. Reluctant merely to disband a quality scientific research organisation, the government required UKAEA to divert its research effort to solving scientific problems for industry by providing paid consultancy and services. UKAEA was ordered to operate on a Trading Fund basis, i.e. to account for itself financially as though it were a private corporation, while remaining fully government owned. After several years of transition, UKAEA was divided in the early 1990s. UKAEA retained ownership of all land and infrastructure, of all nuclear facilities, and of businesses directly related to nuclear power. The remainder was privatised as AEA Technology and floated on the London Stock Exchange. Harwell Laboratory contained elements of both organisations, though the land and infrastructure were owned by UKAEA.

The name Atomic Energy Research Establishment was dropped at the same time, and the site became known as the Harwell International Business Centre. The adjacent site known as Chilton/Harwell Science Campus houses the Rutherford Appleton Laboratory, ISIS neutron source and Diamond Light Source. In 2007, both sites started to use the name Harwell Science and Innovation Campus.

Electron configuration

In atomic physics and quantum chemistry, electron configuration is the arrangement of electrons in an atom, molecule, or other physical structure.[1] Like other elementary particles, the electron is subject to the laws of quantum mechanics, and exhibits both particle-like and wave-like nature. Formally, the quantum state of a particular electron is defined by its wave function, a complex-valued function of space and time. According to the Copenhagen interpretation of quantum mechanics, the position of a particular electron is not well defined until an act of measurement causes it to be detected. The probability that the act of measurement will detect the electron at a particular point in space is proportional to the square of the absolute value of the wavefunction at that point.

Electrons are able to move from one energy level to another by emission or absorption of a quantum of energy, in the form of a photon. Because of the Pauli exclusion principle, no more than two electrons may exist in a given atomic orbital; therefore an electron may only leap to another orbital if there is a vacancy there.

Knowledge of the electron configuration of different atoms is useful in understanding the structure of the periodic table of elements. The concept is also useful for describing the chemical bonds that hold atoms together. In bulk materials this same idea helps explain the peculiar properties of lasers and semiconductors.

Shells and subshells
See also: Electron shell
Electron configuration was first conceived of under the Bohr model of the atom, and it is still common to speak of shells and subshells despite the advances in understanding of the quantum-mechanical nature of electrons. By definition, from the Pauli exclusion principle, an orbital can hold a maximum of two electrons. However in some cases there are several orbitals which have exactly the same energy (they are said to be degenerate), and these orbitals are counted together for the purposes of the electron configuration.

An electron shell is the set of atomic orbitals which share the same principal quantum number, n (the number before the letter in the orbital label): hence the 3s-orbital, the 3p-orbitals and the 3d-orbitals all form part of the third shell. An electron shell can accommodate 2n² electrons, i.e. the first shell can accommodate 2 electrons, the second shell 8 electrons, the third shell 18 electrons, etc.

A subshell is the set of orbitals which have the same orbital label (i.e. the same values of n and l). Hence the three 2p-orbitals form a subshell, which can accommodate six electrons, as do the three 4p-orbitals; the five 3d-orbitals form a subshell which can accommodate ten. The number of electrons which can be placed in a subshell is given by 2(2l+1): that is, two electrons in an "s" subshell, six electrons in a "p" subshell, ten electrons in a "d" subshell and fourteen electrons in an "f" subshell.

The numbers of electrons that can occupy each shell and each subshell arise from the equations of quantum mechanics,[2] in particular the Pauli exclusion principle, which states that no two electrons in the same atom can have the same values of the four quantum numbers.[3]
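The two counting formulas above are easy to check against each other, since a shell's capacity is just the sum of the capacities of its subshells. A minimal sketch (the function names are my own):

```python
def subshell_capacity(l):
    """Electrons a subshell with azimuthal quantum number l can hold: 2(2l+1)."""
    return 2 * (2 * l + 1)

def shell_capacity(n):
    """Electrons a shell with principal quantum number n can hold: 2n^2.

    Equivalently, the sum of its subshell capacities for l = 0 .. n-1.
    """
    return 2 * n ** 2

# The two formulas agree for every shell:
for n in range(1, 5):
    assert shell_capacity(n) == sum(subshell_capacity(l) for l in range(n))

print([shell_capacity(n) for n in (1, 2, 3)])        # [2, 8, 18]
print([subshell_capacity(l) for l in (0, 1, 2, 3)])  # [2, 6, 10, 14] for s, p, d, f
```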


Notation
See also: Atomic orbital
Physicists and chemists use a standard notation to describe the electron configurations of atoms and molecules. For atoms, the notation consists of a string of atomic orbital labels (e.g. 1s, 3d, 4f) with the number of electrons assigned to each orbital (or set of orbitals sharing the same label) placed as a superscript. For example, hydrogen has one electron in the s-orbital of the first shell, so its configuration is written 1s1. Lithium has two electrons in the 1s-subshell and one in the (higher-energy) 2s-subshell, so its configuration is written 1s2 2s1 (pronounced "one-ess-two, two-ess-one"). The configuration of phosphorus (atomic number 15) is written as follows: 1s2 2s2 2p6 3s2 3p3.

For atoms with many electrons, this notation can become lengthy and so an abbreviated notation is used, noting that the first few subshells are identical to those of one or another of the noble gases. Phosphorus, for instance, differs from neon (1s2 2s2 2p6) only by the presence of a third shell. Thus, the electron configuration of neon is pulled out, and phosphorus is written as follows: [Ne] 3s2 3p3. This convention is useful as it is the electrons in the outermost shell which most determine the chemistry of the element.
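As an illustration of the notation, here is a toy sketch (the names, and the truncated filling order, are my own) that builds configuration strings by naive Aufbau filling. It deliberately ignores exceptions such as chromium and copper, discussed later:

```python
# Subshell filling order (Madelung order, truncated) and per-letter capacities.
ORDER = ["1s", "2s", "2p", "3s", "3p", "4s", "3d", "4p", "5s", "4d", "5p", "6s"]
CAP = {"s": 2, "p": 6, "d": 10, "f": 14}

def configuration(z):
    """Ground-state configuration of a neutral atom with z electrons,
    by naive Aufbau filling (ignores exceptions such as Cr and Cu)."""
    parts = []
    for sub in ORDER:
        if z <= 0:
            break
        electrons = min(z, CAP[sub[-1]])  # fill the subshell or use what's left
        parts.append(f"{sub}{electrons}")
        z -= electrons
    return " ".join(parts)

print(configuration(15))  # phosphorus: 1s2 2s2 2p6 3s2 3p3
print(configuration(10))  # neon:       1s2 2s2 2p6
```

A noble-gas abbreviation then amounts to replacing the longest leading match (here, neon's configuration) with "[Ne]".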

The order of writing the orbitals is not completely fixed: some sources group all orbitals with the same value of n together, while other sources (as here) follow the order given by Madelung's rule. Hence the electron configuration of iron can be written as [Ar] 3d6 4s2 (keeping the 3d-electrons with the 3s- and 3p-electrons which are implied by the configuration of argon) or as [Ar] 4s2 3d6 (following the Aufbau principle, see below).

The superscript 1 for a singly-occupied orbital is not compulsory.[4] It is quite common to see the letters of the orbital labels (s, p, d, f) written in an italic or slanting typeface, although the International Union of Pure and Applied Chemistry (IUPAC) recommends a normal typeface (as used here). The choice of letters originates from a now-obsolete system of categorizing spectral lines as "sharp", "principal", "diffuse" and "fundamental", based on their observed fine structure: their modern usage indicates orbitals with an azimuthal quantum number, l, of 0, 1, 2 or 3 respectively. After "f", the sequence continues alphabetically "g", "h", "i"… (l = 4, 5, 6…), although orbitals of these types are rarely required.

The electron configurations of molecules are written in a similar way, except that molecular orbital labels are used instead of atomic orbital labels (see below).


History
Niels Bohr was the first to propose (1923) that the periodicity in the properties of the elements might be explained by the electronic structure of the atom.[5] His proposals were based on the then current Bohr model of the atom, in which the electron shells were orbits at a fixed distance from the nucleus. Bohr's original configurations would seem strange to a present-day chemist: sulfur was given as 2.4.4.6 instead of 1s2 2s2 2p6 3s2 3p4.

The following year, E. C. Stoner incorporated Sommerfeld's third quantum number into the description of electron shells, and correctly predicted the shell structure of sulfur to be 2.8.6.[6] However neither Bohr's system nor Stoner's could correctly describe the changes in atomic spectra in a magnetic field (the Zeeman effect).

Bohr was well aware of this shortcoming (and others), and had written to his friend Wolfgang Pauli to ask for his help in saving quantum theory (the system now known as "old quantum theory"). Pauli realized that the Zeeman effect must be due only to the outermost electrons of the atom, and was able to reproduce Stoner's shell structure, but with the correct structure of subshells, by his inclusion of a fourth quantum number and his exclusion principle (1925):[7]

It should be forbidden for more than one electron with the same value of the main quantum number n to have the same value for the other three quantum numbers k [l], j [ml] and m [ms].

The Schrödinger equation, published in 1926, gave three of the four quantum numbers as a direct consequence of its solution for the hydrogen atom:[2] this solution yields the atomic orbitals which are shown today in textbooks of chemistry (and above). The examination of atomic spectra allowed the electron configurations of atoms to be determined experimentally, and led to an empirical rule (known as Madelung's rule (1936),[8] see below) for the order in which atomic orbitals are filled with electrons.


Aufbau principle
The Aufbau principle (from the German Aufbau, "building up, construction") was an important part of Bohr's original concept of electron configuration. It may be stated as:[9]

a maximum of two electrons are put into orbitals in the order of increasing orbital energy: the lowest-energy orbitals are filled before electrons are placed in higher-energy orbitals.

The principle works very well (for the ground states of the atoms) for the first 18 elements, then increasingly less well for the following 100 elements. The modern form of the Aufbau principle describes an order of orbital energies given by Madelung's rule, first stated by Erwin Madelung in 1936.[8][10]

Orbitals are filled in the order of increasing n+l;
Where two orbitals have the same value of n+l, they are filled in order of increasing n.
This gives the following order for filling the orbitals:

1s 2s 2p 3s 3p 4s 3d 4p 5s 4d 5p 6s 4f 5d 6p 7s 5f 6d 7p
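This ordering need not be memorized; it falls straight out of the two rules above. A short sketch (the function name is my own) that generates it by sorting orbitals on (n+l, n):

```python
def madelung_order(max_n=7):
    """Subshell filling order from Madelung's rule: sort by n+l,
    breaking ties by smaller n first."""
    letters = "spdf"
    # All (n, l) pairs with l < n, restricted to s, p, d, f subshells.
    orbitals = [(n, l) for n in range(1, max_n + 1)
                       for l in range(min(n, len(letters)))]
    orbitals.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))
    return [f"{n}{letters[l]}" for n, l in orbitals]

# The first 19 entries reproduce the sequence listed above:
print(" ".join(madelung_order()[:19]))
# 1s 2s 2p 3s 3p 4s 3d 4p 5s 4d 5p 6s 4f 5d 6p 7s 5f 6d 7p
```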
The Aufbau principle can be applied, in a modified form, to the protons and neutrons in the atomic nucleus, as in the shell model of nuclear physics.


The periodic table
The form of the periodic table is closely related to the electron configuration of the atoms of the elements. For example, all the elements of group 2 have an electron configuration of [E] ns2 (where [E] is an inert gas configuration), and have notable similarities in their chemical properties. The outermost electron shell is often referred to as the "valence shell" and (to a first approximation) determines the chemical properties. It should be remembered that the similarities in chemical properties were remarked upon more than a century before the idea of electron configuration.[11] It is not clear how far Madelung's rule explains (rather than simply describes) the periodic table,[12] although some properties (such as the common +2 oxidation state in the first row of the transition metals) would obviously be different with a different order of orbital filling.


Shortcomings of the Aufbau principle
The Aufbau principle rests on a fundamental postulate that the order of orbital energies is fixed, both for a given element and between different elements: neither of these is true (although they are approximately true enough for the principle to be useful). It considers atomic orbitals as "boxes" of fixed energy into which two electrons and no more can be placed. However, the energy of an electron "in" an atomic orbital depends on the energies of all the other electrons of the atom (or ion, or molecule, etc.). There are no "one-electron solutions" for systems of more than one electron, only a set of many-electron solutions which cannot be calculated exactly[13] (although there are mathematical approximations available, such as the Hartree–Fock method).

That the Aufbau principle is based on an approximation can be seen from the fact that there is an almost-fixed filling order at all: within a given shell, the s-orbital is always filled before the p-orbitals. In a hydrogen-like atom, which has only one electron, the s-orbital and the p-orbitals of the same shell have exactly the same energy (in the absence of an external electric or magnetic field).


Ionization of the transition metals
The naive application of the Aufbau principle leads to a well-known paradox (or apparent paradox) in the basic chemistry of the transition metals. Potassium and calcium appear in the periodic table before the transition metals, and have electron configurations [Ar] 4s1 and [Ar] 4s2 respectively, i.e. the 4s-orbital is filled before the 3d-orbital. This is in line with Madelung's rule, as the 4s-orbital has n+l = 4 (n = 4, l = 0) while the 3d-orbital has n+l = 5 (n = 3, l = 2). However, chromium and copper have electron configurations [Ar] 3d5 4s1 and [Ar] 3d10 4s1 respectively, i.e. one electron has passed from the 4s-orbital to a 3d-orbital to generate a half-filled or filled subshell. In this case, the usual explanation is that "half-filled or completely-filled subshells are particularly stable arrangements of electrons".

The apparent paradox arises when electrons are removed from the transition metal atoms to form ions. The first electrons to be ionized come not from the 3d-orbital, as one would expect if it were "higher in energy", but from the 4s-orbital. The same is true when chemical compounds are formed. Chromium hexacarbonyl can be described as a chromium atom (not an ion; it is in the oxidation state 0) surrounded by six carbon monoxide ligands: it is diamagnetic, and the electron configuration of the central chromium atom is described as 3d6, i.e. the electron which was in the 4s-orbital in the free atom has passed into a 3d-orbital on forming the compound. This interchange of electrons between 4s and 3d is universal among the first series of the transition metals.[14]

The phenomenon is only paradoxical if it is assumed that the energies of atomic orbitals are fixed and unaffected by the presence of electrons in other orbitals. If that were the case, the 3d-orbital would have the same energy as the 3p-orbital, as it does in hydrogen, yet it clearly doesn't. There is no special reason why the Fe2+ ion should have the same electron configuration as the chromium atom, given that iron has two more protons in its nucleus than chromium and that the chemistry of the two species is very different. When care is taken to compare "like with like", the paradox disappears.[15]


Other exceptions to Madelung's rule
There are several more exceptions to Madelung's rule among the heavier elements, and it becomes more and more difficult to resort to simple explanations such as the stability of half-filled subshells. It is possible to predict most of the exceptions by Hartree–Fock calculations,[16] which are an approximate method for taking account of the effect of the other electrons on orbital energies. For the heavier elements, it is also necessary to take account of the effects of special relativity on the energies of the atomic orbitals, as the inner-shell electrons are moving at speeds approaching the speed of light. In general, these relativistic effects[17] tend to decrease the energy of the s-orbitals in relation to the other atomic orbitals.[18]

Applications
The most widespread application of electron configurations is in the rationalization of chemical properties, in both inorganic and organic chemistry. In effect, electron configurations, along with some simplified form of molecular orbital theory, have become the modern equivalent of the valence concept, describing the number and type of chemical bonds that an atom can be expected to form.

This approach is taken further in computational chemistry, which typically attempts to make quantitative estimates of chemical properties. For many years, most such calculations relied upon the "linear combination of atomic orbitals" (LCAO) approximation, using ever larger and more complex basis sets of atomic orbitals as the starting point. The last step in such a calculation is the assignment of electrons among the molecular orbitals according to the Aufbau principle. Not all methods in computational chemistry rely on electron configuration: density functional theory (DFT) is an important example of a method which discards the model.

A fundamental application of electron configurations is in the interpretation of atomic spectra. In this case, it is necessary to convert the electron configuration into one or more term symbols, which describe the different energy levels available to an atom. Term symbols can be calculated for any electron configuration, not just the ground-state configuration listed in tables, although not all the energy levels are observed in practice. It is through the analysis of atomic spectra that the ground-state electron configurations of the elements were experimentally determined.


Electron configuration in molecules
In molecules, the situation becomes more complex, as each molecule has a different orbital structure. The molecular orbitals are labelled according to their symmetry,[19] rather than the atomic orbital labels used for atoms and monoatomic ions: hence, the electron configuration of the dioxygen molecule, O2, is 1σg2 1σu2 2σg2 2σu2 1πu4 3σg2 1πg2.[1] The term 1πg2 represents the two electrons in the two degenerate π*-orbitals (antibonding). From Hund's rules, these electrons have parallel spins in the ground state, and so dioxygen has a net magnetic moment (it is paramagnetic). The explanation of the paramagnetism of dioxygen was a major success for molecular orbital theory.


Electron configuration in solids
In a solid, the electron states become very numerous. They cease to be discrete, and effectively blend together into continuous ranges of possible states (an electron band). The notion of electron configuration ceases to be relevant, and yields to band theory.

Sunday, November 9, 2008

Benz Cars

List of Mercedes-Benz cars

Daimler and Benz vehicles
Benz Patent Motorwagen 1886
Daimler Motor Car 1886
Benz Velo 1894
Mercedes 35 hp 1901
Mercedes Simplex 1902
GP Mercedes 1908
Blitzen Benz 1909
Mercedes Grand Prix Racing Car 1914
Benz 10/30hp 1921

Mercedes-Benz vehicles

1920s
K (Kurz-short) 1926
S 1927
SSK 1928
SS 1928
10/50hp "Stuttgart" 1929

1930s
370S 1930s
170 Saloon 1931-1932
130 Saloon 1933
150 1934
W31 / G4 1934-1939 (6 wheels) [1] [2]
170V 1935-1950
770 (Grosser) 1930-1943 in two series:
W07 1930-1938
W150 1938-1943
500K 1935-1936
260D 1936-1940
320 Saloon 1937
W125 1937
230 1938
W163 GP 1939

1950s
180 1957-1962
190 1959-1963
W196 GP 1954
220 1954
300S mid-1950s
300SL 1954-1963 in two series:
Gullwing Coupe 1954-1957
Roadster 1958-1963
190SL 1955-1963

1960s
190c 1962-1965
230 1965-1966
200 1966-1968
200D 1966-1967
230 1968-1972
250 1968-1972

1970s
280 1972-1976
280C 1973-1976
300D 1975-1976
G-Class 1979-
SL-Class 1957-
W113 1963-1971
R107 1972-1989

1980s
190 1982-1993
300D 1977-1985
300CD 1978-1985
300SD 1981-1985
300SDL 1986-1987
300TD 1978-1985
350SDL 1990-1991
500SE 1984-1991
500SEC 1984-1991
500SEL 1984-1991

1990s
A-Class 1997-
C-Class 1993-
CLK-Class 1998-
E-Class 1995-
SL-Class 1989-2001
Vaneo 1997-2004

2000s
Mercedes-Benz B-Class 2005-
Mercedes-Benz C-Class
W203 2000-2007
W204 2007-
Mercedes-Benz CL-Class
W215 2000-2006
W216 2007-
Mercedes-Benz CLK-Class 2000-
Mercedes-Benz CLS-Class 2004-
Mercedes-Benz E-Class
W210 1995-2002
W211 2003-
Mercedes-Benz G-Class 2000-
Mercedes-Benz GL-Class
X164 2007-
Mercedes-Benz M-Class
W163 1998-2005
W164 2006-
Mercedes-Benz S-Class
W220 1999-2005
W221 2006-
Mercedes-Benz SLK-Class
R170 1998-2004
R171 2005-
Mercedes-Benz SL-Class 2001-
R230
Mercedes-Benz SLR-McLaren 2003-
722 Edition 2006-
Retrieved from "http://en.wikipedia.org/wiki/List_of_Mercedes-Benz_cars"


Tuesday, November 4, 2008

Automobile


An automobile or motor car is a wheeled motor vehicle for transporting passengers, which also carries its own engine or motor. Most definitions of the term specify that automobiles are designed to run primarily on roads, to have seating for one to eight people, to typically have four wheels, and to be constructed principally for the transport of people rather than goods.[1] However, the term is far from precise because there are many types of vehicles that do similar tasks.
Automobile comes via French, from the Ancient Greek autós [self] and the Latin mobilis [movable]: a vehicle that moves itself, rather than being pulled or pushed by a separate animal or another vehicle. The alternative name car is believed to originate from the Latin word carrus or carrum [wheeled vehicle], the Middle English word carre [cart] (from Old North French), or karros, a Gallic wagon.[2][3]
As of 2002, there were 590 million passenger cars worldwide (roughly one car per eleven people).[4]

History
Main article: History of the automobile
Although Nicolas-Joseph Cugnot is often credited with building the first self-propelled mechanical vehicle or automobile in about 1769, by adapting an existing horse-drawn vehicle, this claim is disputed by some, who doubt that Cugnot's three-wheeler ever ran or was stable. Others claim that Ferdinand Verbiest, a member of a Jesuit mission in China, built the first steam-powered vehicle around 1672: a small-scale model designed as a toy for the Chinese Emperor, unable to carry a driver or a passenger, but quite possibly the first working steam-powered vehicle ('auto-mobile').[5][6] What is not in doubt is that Richard Trevithick built and demonstrated his Puffing Devil road locomotive in 1801, believed by many to be the first demonstration of a steam-powered road vehicle, although it was unable to maintain sufficient steam pressure for long periods and would have been of little practical use.
In Russia, in the 1780s, Ivan Kulibin developed a human-pedalled, three-wheeled carriage with modern features such as a flywheel, brake, gear box, and bearings; however, it was not developed further.[7]
François Isaac de Rivaz, a Swiss inventor, designed the first internal combustion engine in 1806, fueled by a mixture of hydrogen and oxygen, and used it to develop the world's first vehicle, albeit a rudimentary one, to be powered by such an engine. The design was not very successful, as was the case with others, such as Samuel Brown, Samuel Morey, and Étienne Lenoir with his hippomobile, who each produced vehicles (usually adapted carriages or carts) powered by clumsy internal combustion engines.[8]
In November 1881 French inventor Gustave Trouvé demonstrated a working three-wheeled automobile that was powered by electricity. This was at the International Exhibition of Electricity in Paris.[9]
Although several other German engineers (including Gottlieb Daimler, Wilhelm Maybach, and Siegfried Marcus) were working on the problem at about the same time, Karl Benz generally is acknowledged as the inventor of the modern automobile.[8]
An automobile powered by his own four-stroke cycle gasoline engine was built in Mannheim, Germany by Karl Benz in 1885 and granted a patent in January of the following year under the auspices of his major company, Benz & Cie., which was founded in 1883. It was an integral design, without the adaptation of other existing components and including several new technological elements to create a new concept. This is what made it worthy of a patent. He began to sell his production vehicles in 1888.

Karl Benz

A photograph of the original Benz Patent Motorwagen, first built in 1885 and patented in 1886
In 1879 Benz was granted a patent for his first engine, which had been designed in 1878. Many of his other inventions made the use of the internal combustion engine feasible for powering a vehicle.
His first Motorwagen was built in 1885, and he was awarded the patent for its invention on the basis of his application of January 29, 1886. Benz began promotion of the vehicle on July 3, 1886, and approximately 25 Benz vehicles were sold between 1888 and 1893, when his first four-wheeler was introduced along with a model intended for affordability. These, too, were powered by four-stroke engines of his own design. Emile Roger of France, already producing Benz engines under license, now added the Benz automobile to his line of products. Because France was more open to the early automobiles, initially more were built and sold in France through Roger than Benz sold in Germany.
In 1896, Benz designed and patented the first internal-combustion flat engine, called a boxermotor in German. During the last years of the nineteenth century Benz was the largest automobile company in the world, with 572 units produced in 1899, and because of its size Benz & Cie. became a joint-stock company.
Daimler and Maybach founded Daimler Motoren Gesellschaft (Daimler Motor Company, DMG) in Cannstatt in 1890 and under the brand name, Daimler, sold their first automobile in 1892, which was a horse-drawn stagecoach built by another manufacturer, that they retrofitted with an engine of their design. By 1895 about 30 vehicles had been built by Daimler and Maybach, either at the Daimler works or in the Hotel Hermann, where they set up shop after falling out with their backers. Benz and the Maybach and Daimler team seem to have been unaware of each other's early work. They never worked together because by the time of the merger of the two companies, Daimler and Maybach were no longer part of DMG.
Daimler died in 1900 and later that year, Maybach designed an engine named Daimler-Mercedes, that was placed in a specially-ordered model built to specifications set by Emil Jellinek. This was a production of a small number of vehicles for Jellinek to race and market in his country. Two years later, in 1902, a new model DMG automobile was produced and the model was named Mercedes after the Maybach engine which generated 35 hp. Maybach quit DMG shortly thereafter and opened a business of his own. Rights to the Daimler brand name were sold to other manufacturers.
Karl Benz proposed co-operation between DMG and Benz & Cie. when economic conditions began to deteriorate in Germany following the First World War, but the directors of DMG refused to consider it initially. Negotiations between the two companies resumed several years later when these conditions worsened and, in 1924 they signed an Agreement of Mutual Interest, valid until the year 2000. Both enterprises standardized design, production, purchasing, and sales and they advertised or marketed their automobile models jointly—although keeping their respective brands.
On June 28, 1926, Benz & Cie. and DMG finally merged as the Daimler-Benz company, baptizing all of its automobiles Mercedes Benz as a brand honoring the most important model of the DMG automobiles, the Maybach design later referred to as the 1902 Mercedes-35hp, along with the Benz name. Karl Benz remained a member of the board of directors of Daimler-Benz until his death in 1929 and at times, his two sons participated in the management of the company as well.
In 1890, Emile Levassor and Armand Peugeot of France began producing vehicles with Daimler engines and so laid the foundation of the automobile industry in France.
The first design for an American automobile with a gasoline internal combustion engine was drawn up in 1877 by George Selden of Rochester, New York, who applied for a patent on an automobile in 1879, but the application lapsed because the vehicle was never built and shown to work (a requirement for a patent). After a delay of sixteen years and a series of attachments to his application, on November 5, 1895, Selden was granted a United States patent (U.S. Patent 549,160) for a two-stroke automobile engine, which hindered more than encouraged the development of automobiles in the United States. His patent was challenged by Henry Ford and others, and overturned in 1911.
In Britain there had been several attempts to build steam cars with varying degrees of success with Thomas Rickett even attempting a production run in 1860.[10] Santler from Malvern is recognized by the Veteran Car Club of Great Britain as having made the first petrol-powered car in the country in 1894[11] followed by Frederick William Lanchester in 1895 but these were both one-offs.[11] The first production vehicles in Great Britain came from the Daimler Motor Company, a company founded by Harry J. Lawson in 1896 after purchasing the right to use the name of the engines. Lawson's company made its first automobiles in 1897 and they bore the name Daimler.[11]
In 1892, German engineer Rudolf Diesel was granted a patent for a "New Rational Combustion Engine". In 1897 he built the first Diesel Engine.[8] Steam-, electric-, and gasoline-powered vehicles competed for decades, with gasoline internal combustion engines achieving dominance in the 1910s.
Although various pistonless rotary engine designs have attempted to compete with the conventional piston and crankshaft design, only Mazda's version of the Wankel engine has had more than very limited success.

Production

Ransom E. Olds.
The large-scale, production-line manufacturing of affordable automobiles was pioneered by Ransom Olds at his Oldsmobile factory in 1902. The concept was greatly expanded by Henry Ford, beginning in 1914.
As a result, Ford's cars came off the line at fifteen-minute intervals, much faster than under previous methods, increasing production seven to one (12.5 man-hours before, 1 hour 33 minutes after) while using less manpower.[12] The line was so successful that paint became a bottleneck: only Japan black would dry fast enough, forcing the company to drop the variety of colors available before 1914, until fast-drying Duco lacquer was developed in 1926. This is the source of Ford's apocryphal remark, "any color as long as it's black".[12] In 1914, an assembly line worker could buy a Model T with four months' pay.[12]
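Taken at face value, the man-hour figures above imply roughly an eightfold reduction in assembly time, consistent with production rising seven to one (seven extra cars in the time one formerly took). A quick check:

```python
# Assembly time for one car, using the figures quoted above.
before_minutes = 12.5 * 60       # 12.5 man-hours before the moving line
after_minutes = 1 * 60 + 33      # 1 hour 33 minutes on the moving line

speedup = before_minutes / after_minutes
print(f"{before_minutes:.0f} min -> {after_minutes} min: "
      f"about {speedup:.1f}x faster")
```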

Portrait of Henry Ford (ca. 1919)
Ford's complex safety procedures—especially assigning each worker to a specific location instead of allowing them to roam about—dramatically reduced the rate of injury. The combination of high wages and high efficiency is called "Fordism," and was copied by most major industries. The efficiency gains from the assembly line also coincided with the economic rise of the United States. The assembly line forced workers to work at a certain pace with very repetitive motions which led to more output per worker while other countries were using less productive methods.
In the automotive industry its success was dominating, and the method quickly spread worldwide: Ford France and Ford Britain were founded in 1911, Ford Denmark in 1923, and Ford Germany in 1925; in 1921, Citroën became the first native European manufacturer to adopt the production method. Soon, companies had to have assembly lines or risk going broke; by 1930, 250 companies which did not had disappeared.[12]
Development of automotive technology was rapid, due in part to the hundreds of small manufacturers competing to gain the world's attention. Key developments included electric ignition and the electric self-starter (both by Charles Kettering, for the Cadillac Motor Company in 1910-1911), independent suspension, and four-wheel brakes.

Ford Model T, 1927, regarded as the first affordable American automobile
Since the 1920s, nearly all cars have been mass-produced to meet market needs, so marketing plans often have heavily influenced automobile design. It was Alfred P. Sloan who established the idea of different makes of cars produced by one company, so buyers could "move up" as their fortunes improved.
Reflecting the rapid pace of change, makes shared parts with one another so larger production volume resulted in lower costs for each price range. For example, in the 1930s, LaSalles, sold by Cadillac, used cheaper mechanical parts made by Oldsmobile; in the 1950s, Chevrolet shared hood, doors, roof, and windows with Pontiac; by the 1990s, corporate drivetrains and shared platforms (with interchangeable brakes, suspension, and other parts) were common. Even so, only major makers could afford high costs, and even companies with decades of production, such as Apperson, Cole, Dorris, Haynes, or Premier, could not manage: of some two hundred American car makers in existence in 1920, only 43 survived in 1930, and with the Great Depression, by 1940, only 17 of those were left.[12]
In Europe much the same would happen. Morris set up its production line at Cowley in 1924, and soon outsold Ford, while beginning in 1923 to follow Ford's practise of vertical integration, buying Hotchkiss (engines), Wrigley (gearboxes), and Osberton (radiators), for instance, as well as competitors, such as Wolseley: in 1925, Morris had 41% of total British car production. Most British small-car assemblers, from Abbey to Xtra had gone under. Citroen did the same in France, coming to cars in 1919; between them and other cheap cars in reply such as Renault's 10CV and Peugeot's 5CV, they produced 550,000 cars in 1925, and Mors, Hurtu, and others could not compete.[12] Germany's first mass-manufactured car, the Opel 4PS Laubfrosch (Tree Frog), came off the line at Russelsheim in 1924, soon making Opel the top car builder in Germany, with 37.5% of the market.[12]
See also: Automotive industry

Fuel and propulsion technologies

Auto rickshaws in New Delhi run on Compressed Natural Gas
See also: Alternative fuel vehicle
Most automobiles in use today are propelled by gasoline (also known as petrol) or diesel internal combustion engines, which are known to cause air pollution and are also blamed for contributing to climate change and global warming.[13] Increasing costs of oil-based fuels, tightening environmental laws and restrictions on greenhouse gas emissions are propelling work on alternative power systems for automobiles. Efforts to improve or replace existing technologies include the development of hybrid vehicles, and electric and hydrogen vehicles which do not release pollution into the air.

Petroleum fuels
Main article: Petroleum fuel engine

Diesel
Main article: Diesel engine
Diesel-engined cars have long been popular in Europe, with the first models introduced in the 1930s by Mercedes-Benz and Citroën. The main benefit of diesel engines is a fuel-burn efficiency of up to 50%, compared with 27%[14] in the best gasoline engines. A downside of the diesel is the presence of fine soot particulates in the exhaust gases, and manufacturers are now starting to fit filters to remove these. Many diesel-powered cars can also run, with little or no modification, on 100% biodiesel.
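The efficiency gap can be put in concrete terms: useful work per litre is the fuel's energy content times the engine's efficiency. The energy densities below are rough, assumed values for illustration (they are not from this article):

```python
# Useful mechanical energy per litre = energy content x engine efficiency.
# Energy densities are approximate, assumed values for illustration only.
DIESEL_MJ_PER_L = 36.0      # assumed lower heating value of diesel
GASOLINE_MJ_PER_L = 32.0    # assumed lower heating value of gasoline

diesel_useful = DIESEL_MJ_PER_L * 0.50      # 50% efficiency quoted above
gasoline_useful = GASOLINE_MJ_PER_L * 0.27  # 27% for the best gasoline engines

print(f"Diesel:   {diesel_useful:.1f} MJ of work per litre")
print(f"Gasoline: {gasoline_useful:.1f} MJ of work per litre")
print(f"Ratio:    {diesel_useful / gasoline_useful:.2f}x")
```

On these assumptions, a litre of diesel yields roughly twice the useful work of a litre of gasoline, which is why diesels post much better fuel mileage.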

Gasoline
Main article: Petrol engine

2007 Mark II (BMW) Mini Cooper
Gasoline engines have the advantage over diesel of being lighter and able to work at higher rotational speeds, and they are the usual choice for high-performance sports cars. Continuous development of gasoline engines for over a hundred years has produced improvements in efficiency and reduced pollution. The carburetor was used on nearly all road-car engines until the 1980s, but it was long realised that better control of the fuel/air mixture could be achieved with fuel injection. Indirect fuel injection was first used in aircraft engines from 1909, in racing-car engines from the 1930s, and in road cars from the late 1950s.[14] Gasoline direct injection (GDI) is now starting to appear in production vehicles such as the 2007 (Mark II) BMW Mini. Exhaust gases are also cleaned up by fitting a catalytic converter into the exhaust system. Clean-air legislation in many of the car industry's most important markets has made both catalysts and fuel injection virtually universal fittings.

Most modern gasoline engines are also capable of running with up to 15% ethanol mixed into the gasoline, although older vehicles may have seals and hoses that can be harmed by ethanol. With a small amount of redesign, gasoline-powered vehicles can run on ethanol concentrations as high as 85%. Pure ethanol is used in some parts of the world (such as Brazil), but vehicles must be started on gasoline and switched over to ethanol once the engine is running. Most gasoline-engined cars can also run on LPG with the addition of an LPG tank for fuel storage and carburetion modifications to add an LPG mixer. LPG produces fewer toxic emissions and is a popular fuel for fork-lift trucks that have to operate inside buildings.

The hydrogen powered FCHV (Fuel Cell Hybrid Vehicle) was developed by Toyota in 2005

Biofuels
Main articles: Biofuel, Ethanol fuel, and biogasoline
Ethanol, other alcohol fuels (such as biobutanol), and biogasoline are in widespread use as automotive fuels. Most alcohols have less energy per liter than gasoline and are usually blended with it. Alcohols are used for a variety of reasons: to increase octane, to improve emissions, and as an alternative to petroleum-based fuel, since they can be made from agricultural crops. Brazil's ethanol program provides about 20% of the nation's automotive fuel needs, as a result of the mandatory use of the E25 gasoline blend throughout the country, 3 million cars that operate on pure ethanol, and 6 million dual- or flexible-fuel vehicles sold since 2003[15] that run on any mix of ethanol and gasoline. The commercial success of "flex" vehicles, as they are popularly known, had allowed sugarcane-based ethanol fuel to achieve a 50% share of the gasoline market by April 2008.[16][17][18]
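Because ethanol carries less energy per litre than gasoline, a blend's energy content scales linearly with the ethanol fraction. A minimal sketch, using approximate volumetric energy densities (about 21 MJ/L for ethanol and 32 MJ/L for gasoline; assumed figures, not from this article):

```python
def blend_energy(ethanol_fraction,
                 ethanol_mj_per_l=21.0,    # assumed energy density of ethanol
                 gasoline_mj_per_l=32.0):  # assumed energy density of gasoline
    """Volumetric energy content (MJ/L) of an ethanol-gasoline blend."""
    return (ethanol_fraction * ethanol_mj_per_l
            + (1 - ethanol_fraction) * gasoline_mj_per_l)

for name, frac in [("E10", 0.10), ("E25", 0.25), ("E85", 0.85)]:
    mj = blend_energy(frac)
    print(f"{name}: {mj:.2f} MJ/L ({mj / blend_energy(0):.0%} of pure gasoline)")
```

This is why flex-fuel cars running on high-ethanol blends show somewhat shorter range per tank even though the fuel itself may be cheaper per litre.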

Electric
Main articles: Battery electric vehicle, Hybrid vehicle, and Plug-in hybrid

The Henney Kilowatt, the first modern (transistor-controlled) electric car.

2007 Tesla electric powered Roadster

Tata/MDI OneCAT Air Car

A CNG powered high-floor Neoplan AN440A, run on Compressed Natural Gas
The first electric cars were built around 1832, well before internal-combustion cars appeared.[19] For a period electrics were considered superior because of the silence of electric motors compared with the very loud gasoline engine, an advantage removed by Hiram Percy Maxim's invention of the muffler in 1897. Thereafter, internal-combustion cars had two critical advantages: long range and high specific energy (the far lower weight of petrol fuel versus the weight of batteries). Battery electric vehicles that could rival internal-combustion models had to wait for the introduction of modern semiconductor controls and improved batteries.

Because they can deliver high torque at low revolutions, electric cars do not require as complex a drive train and transmission as internal-combustion cars. Some post-2000 electric car designs, such as the Venturi Fétish, can accelerate from 0-60 mph (0-96 km/h) in 4.0 seconds with a top speed around 130 mph (210 km/h). Others have a range of 250 miles (400 km) on the EPA highway cycle, requiring 3½ hours to charge completely.[20] Equivalent fuel efficiency to internal combustion is not well defined, but some press reports put it at around 135 miles per US gallon (57 km/L; 162 mpg-imp).
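The mixed units in that last figure are pure arithmetic: the mile and both gallon sizes are exact definitions, so the quoted 135 mpg (US) equivalences can be checked directly:

```python
MILE_KM = 1.609344          # kilometres per mile (exact definition)
US_GALLON_L = 3.785411784   # litres per US gallon (exact definition)
IMP_GALLON_L = 4.54609      # litres per imperial gallon (exact definition)

def mpg_us_to_km_per_l(mpg):
    """Convert miles per US gallon to kilometres per litre."""
    return mpg * MILE_KM / US_GALLON_L

def mpg_us_to_mpg_imp(mpg):
    """Convert miles per US gallon to miles per imperial gallon."""
    return mpg * IMP_GALLON_L / US_GALLON_L

print(f"{mpg_us_to_km_per_l(135):.0f} km/L")    # ~57 km/L, as quoted
print(f"{mpg_us_to_mpg_imp(135):.0f} mpg-imp")  # ~162 mpg-imp, as quoted
```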

Steam
Main article: steam car
Steam power, usually using an oil- or gas-heated boiler, was also in use until the 1930s but had the major disadvantage of being unable to power the car until boiler pressure was available (although the newer models could achieve this in well under a minute). It has the advantage of being able to produce very low emissions as the combustion process can be carefully controlled. Its disadvantages include poor heat efficiency and extensive requirements for electric auxiliaries.[21]

Air
Main article: Compressed-air car
A compressed-air car is an alternative-fuel car that uses a motor powered by compressed air. The car can be powered solely by air, or by air combined (as in a hybrid electric vehicle) with a gasoline, diesel, ethanol, or electric plant and regenerative braking. Instead of mixing fuel with air and burning it to drive pistons with hot expanding gases, compressed-air cars use the expansion of compressed air to drive their pistons. Several prototypes already exist and are scheduled for worldwide sale by the end of 2008. Companies releasing this type of car include Tata Motors and Motor Development International (MDI).

Gas turbine
In the 1950s there was a brief interest in using gas turbine engines and several makers including Rover and Chrysler produced prototypes. In spite of the power units being very compact, high fuel consumption, severe delay in throttle response, and lack of engine braking meant no cars reached production.

Rotary (Wankel) engines
Rotary Wankel engines were introduced into road cars by NSU with the Ro 80 and later were seen in the Citroën GS Birotor and several Mazda models. In spite of their impressive smoothness, poor reliability and fuel economy led to them largely disappearing. Mazda, beginning with the R100 then RX-2, has continued research on these engines, overcoming most of the earlier problems with the RX-7 and RX-8.

Rocket and jet cars
A rocket car holds the record in drag racing. The fastest such cars, however, are used to set the Land Speed Record and are propelled by jets from rocket, turbojet, or, more recently and most successfully, turbofan engines. The ThrustSSC, using two Rolls-Royce Spey turbofans with reheat, exceeded the speed of sound at ground level in 1997.

Wednesday, October 22, 2008

How To Play Soccer - The Skills


Soccer is a physically demanding game, requiring a combination of tenacity, fitness, guile, mental toughness, and skill.
Skill is the number one aspect that players, especially young players, should focus on. Being technically proficient ensures that you can play better soccer; with a team of equally skilled players, your team will be able to use other aspects of the game, such as tactics and fitness, to better effect and improve even further.
The basic skills required of a soccer player are:
* Ball Control
* Dribbling
* Passing
* Shooting
* Heading
* Defending
* Goalkeeping
Ball Control
Ball control is the ability to receive the ball, from any angle or height, and get it "under control" as quickly as possible. Typically this means getting the ball on the ground, in position and ready for your next move. It is of little use getting the ball on the ground on your left side if you wish to make a pass out to the right. All parts of the body except the hands and arms can be used to control the ball, and generally it is best to use the surface with the greatest body area. So, if using the feet, get behind the ball and use the inside of the foot to cushion it, directing the ball to the best position for your next move, which may be to dribble or to pass.
Dribbling
Dribbling with the ball is an exciting part of the game. Dribbling is running with the ball, keeping it as close as is needed to make sure the opposition cannot take it from you. In a congested area with many players, the ball must be kept very close, and a variety of fakes and feints must be used to try to create space and elude the defenders. In less crowded areas, the ball can be played further away from the body, allowing you to run faster into the opposition's territory.
Passing
There are a number of ways to pass the ball to a team mate, including using the inside of the foot, the instep or top of the foot, and the outside of the foot. However, passing using the inside of the foot (the push pass) is by far the most common method of passing the ball as it is the easiest and most accurate pass especially over short distances of 5 - 20 yards, and is the one that should be practiced the most.
Shooting
A shot not taken is a shot missed. In other words, players should shoot whenever they have sight of goal and can make the distance. Shooting is all about kicking the ball where the goalkeeper isn't, and almost any part of the foot can be used, but there are a few guidelines: shoot towards the far post, shoot low, and put accuracy before power. Such shots are harder for the keeper to save; if the shot is going wide, a team mate may still slot home the wayward ball, and if it is on target, it has a chance to go in.
Heading
There are two types of headers, attacking and defensive. Attacking headers are like shots and should generally be aimed down to the ground, again allowing for a team mate or deflection to score a goal for a wayward header. Defensive headers on the other hand should be aimed high and either wide, or out of play to allow the other defenders to adjust to the attack and mark the opposition.
Defending
The first role of a defender is to ensure that the opposition cannot shoot, generally by getting in the way or tackling the attacker. Good defenders not only get in the way but dictate how the attacker has to play the ball, generally forcing play away from the goal. Defenders must also learn to delay attackers, particularly when outnumbered: delaying the attack gives your team mates time to recover and mark the opposition players.
Goalkeeping
Often neglected at training, goalkeeping demands the skills of an outfield player plus others such as diving, jumping, and good ball handling. A goalkeeper's judgment on whether to catch, punch, or palm a shot can make the difference between a win and a loss. Goalkeepers also need to read the game and offer advice to team mates, as they are the only players with a full view of the field, and they must be brave and fearless in their commitment.

Monday, August 11, 2008

NUCLEAR SCIENCE AND TECHNOLOGY



The Atom
Atoms are the smallest units of matter that have all the characteristics of an element. All matter (solid, liquid, or gaseous) consists of elements. For example, an iron atom is the smallest unit of iron that has all the characteristics of the element iron, and a helium atom is the smallest unit of helium that has all the characteristics of the element helium. Atoms are the building blocks of everything in the universe.

NUCLEAR TECHNOLOGY
Nuclear technology is technology that involves the reactions of atomic nuclei. It has found applications from smoke detectors to nuclear reactors, and from gun sights to nuclear weapons. There is a great deal of public concern about its possible implications, and every application of nuclear technology is reviewed with care.

Discovery

In 1896, Henri Becquerel was investigating phosphorescence in uranium salts when he discovered a new phenomenon which came to be called radioactivity.[1] He, Pierre Curie, and Marie Curie began investigating the phenomenon. In the process they isolated the element radium, which is highly radioactive. They discovered that radioactive materials produce intense, penetrating rays of several distinct sorts, which they called alpha rays, beta rays, and gamma rays. Some of these kinds of radiation could pass through ordinary matter, and all of them could cause damage in large amounts; all of the early researchers received various radiation burns, much like sunburn, and thought little of it.

The new phenomenon of radioactivity was seized upon by the manufacturers of quack medicine (as had been the discoveries of electricity and magnetism earlier), and any number of patent medicines and treatments involving radioactivity were put forward. Gradually it came to be realized that the radiation produced by radioactive decay was ionizing radiation, and that quantities too small to burn presented a severe long-term hazard. Many of the scientists working on radioactivity died of cancer as a result of their exposure.
Radioactive patent medicines mostly disappeared, but other applications of radioactive materials persisted, such as the use of radium salts to produce glowing dials on meters. As the atom came to be better understood, the nature of radioactivity became clearer: some atomic nuclei are unstable and can decay, releasing energy in the form of gamma rays (high-energy photons), alpha particles (a pair of protons and a pair of neutrons), and beta particles (high-energy electrons).

Nuclear fission

Radioactive decay is generally a slow process that is difficult to control, and it is unsuited to building a weapon. However, other nuclear reactions are possible. In particular, a sufficiently unstable nucleus can undergo nuclear fission, breaking into two smaller nuclei and releasing energy and some fast neutrons. Such a neutron could, if captured by another nucleus, cause that nucleus to undergo fission as well; the process could then continue in a nuclear chain reaction, releasing a vast amount of energy in a short time. When discovered on the eve of World War II, this led multiple countries to begin programs investigating the possibility of constructing an atomic bomb, a weapon which used fission reactions to generate far more energy than could be created with chemical explosives. The Manhattan Project, run by the United States with the help of the United Kingdom and Canada, developed multiple fission weapons which were used against Japan in 1945. During the project, the first fission reactors were developed as well, though they were primarily for weapons manufacture and did not generate power.

Nuclear fusion

Main article: Timeline of nuclear fusion

Nuclear fusion technology was initially pursued only at the theoretical stage during World War II, when scientists on the Manhattan Project (led by Edward Teller) investigated the possibility of using the great power of a fission reaction to ignite fusion reactions.
It took until 1952 for the first full detonation of a hydrogen bomb to take place, so called because it used reactions between deuterium and tritium, isotopes of hydrogen. Fusion reactions are much more energetic per unit mass of fuel, but it is much more difficult to ignite a fusion chain reaction than a fission one. Research into using nuclear fusion for civilian power generation also began during the 1940s; technical and theoretical difficulties have hindered the development of working civilian fusion technology, though research continues to this day around the world.

Nuclear weapons

The design of a nuclear weapon is more complicated than it might seem; it is quite difficult to ensure that the chain reaction consumes a significant fraction of the fuel before the device flies apart. Construction is also more difficult than it might seem, as no naturally occurring substance is sufficiently unstable for the process to occur. One isotope of uranium, uranium-235, is naturally occurring and sufficiently unstable, but it is always found mixed with the more stable isotope uranium-238, so a complicated and difficult process of isotope separation must be performed to obtain uranium-235. Alternatively, the element plutonium possesses a sufficiently unstable isotope, but plutonium does not occur naturally and must be manufactured in a nuclear reactor. Ultimately, the Manhattan Project manufactured nuclear weapons based on each of these. The first atomic bomb was detonated in a test code-named "Trinity", near Alamogordo, on July 16, 1945. After much debate on the morality of using such a horrifying weapon, two bombs were dropped on the Japanese cities of Hiroshima and Nagasaki, and the Japanese surrender followed shortly. Several nations then began nuclear weapons programs, developing ever more destructive bombs in an arms race to obtain what many called a nuclear deterrent.
Nuclear weapons are the most destructive weapons known - the archetypal weapons of mass destruction. Throughout the Cold War, the opposing powers held huge nuclear arsenals, sufficient to kill hundreds of millions of people, and generations grew up under the shadow of nuclear devastation. However, the tremendous energy released in the detonation of a nuclear weapon also suggested the possibility of a new energy source.

Nuclear power

Main article: Nuclear power

Commercial nuclear power began in the early 1950s in the US, UK, and Soviet Union. The first commercial reactors were heavily based on either research reactors or military reactors. The first commercial nuclear reactor to go online in the US was the Shippingport Atomic Power Station in Western Pennsylvania. Some countries have banned all forms of nuclear power.

Types of nuclear reaction

Most natural nuclear reactions fall under the heading of radioactive decay, in which an unstable nucleus decays after a random interval. The most common processes by which this occurs are alpha decay, beta decay, and gamma decay. Under suitable circumstances, a large unstable nucleus can break into two smaller nuclei, undergoing nuclear fission and releasing fast neutrons. If these neutrons are captured by a suitable nucleus, they can trigger fission as well, leading to a chain reaction. A mass of fissile material large enough (and in a suitable configuration) to sustain such a reaction is called a critical mass. When a neutron is captured by a suitable nucleus, fission may occur immediately, or the nucleus may persist in an unstable state for a short time. If there are enough immediate decays to carry on the chain reaction, the mass is said to be prompt critical, and the energy release will grow rapidly and uncontrollably, usually leading to an explosion.
However, if the mass is critical only when the delayed neutrons are included, the reaction can be controlled, for example by the introduction or removal of neutron absorbers. This is what allows nuclear reactors to be built. Fast neutrons are not easily captured by nuclei; they must be slowed (to become slow neutrons), generally by collision with the nuclei of a neutron moderator, before they can be easily captured.

If nuclei are forced to collide, they can undergo nuclear fusion. This process may release or absorb energy: when the resulting nucleus is lighter than iron, energy is normally released; when it is heavier than iron, energy is generally absorbed. Fusion occurs in stars and results, through stellar nucleosynthesis, in the formation of the light elements, from lithium to calcium. Some heavy elements beyond iron and nickel, which cannot be created by fusion, form via neutron capture (the s-process), and the remaining abundance of heavy elements, from nickel to uranium and beyond, is due to supernova nucleosynthesis (the r-process). Of course, these natural astrophysical processes are not examples of nuclear technology.

Because of the very strong electrical repulsion between nuclei, fusion is difficult to achieve in a controlled fashion. Hydrogen bombs obtain their enormous destructive power from fusion, but controlled fusion power has so far proved elusive. Controlled fusion can be achieved in particle accelerators; this is how many synthetic elements were produced. The Farnsworth-Hirsch fusor is a device which can produce controlled fusion (and which can be built as a high-school science project), albeit at a net energy loss; it is sold commercially as a neutron source.

The vast majority of everyday phenomena do not involve nuclear reactions; most involve only gravity and electromagnetism.
Of the fundamental forces of nature, gravity and electromagnetism are not the strongest, but the other two, the strong nuclear force and the weak nuclear force, are essentially short-range forces, so they play no role outside the atomic nucleus. Atomic nuclei are generally kept apart because they carry positive electrical charges and therefore repel each other, so in ordinary circumstances they cannot meet.

Nuclear Accidents

Three Mile Island Incident (1979): The Three Mile Island incident, which ironically occurred two weeks after the release of the disaster film The China Syndrome, greatly affected the public's perception of nuclear power. Many human-factors engineering improvements were made to American power plants in the wake of Three Mile Island's partial meltdown.[2]

Chernobyl Accident (1986): The Chernobyl accident in 1986 further alarmed the public about nuclear power. While design differences between the RBMK reactor used at Chernobyl and most western reactors virtually eliminate the possibility of such an accident occurring outside the former Soviet Union, it is only recently that the general public in the United States has started to embrace nuclear energy.

Examples Of Nuclear Technology

Nuclear power is a type of nuclear technology involving the controlled use of nuclear fission to release energy for work including propulsion, heat, and the generation of electricity. Nuclear energy is produced by a controlled nuclear chain reaction which creates heat - heat that is used to boil water, produce steam, and drive a steam turbine. The turbine can be used for mechanical work and also to generate electricity. Currently nuclear power propels aircraft carriers, icebreakers, and submarines, and provided approximately 15.7% of the world's electricity in 2004.
The risk of radiation and cost have prevented the use of nuclear power in ordinary transport ships.[3]

Medical Applications

Imaging: medical and dental X-ray imagers use Cobalt-60 or other radiation sources. Technetium-99m, attached to organic molecules, is used as a radioactive tracer in the human body before being excreted by the kidneys. Positron-emitting nuclides are used for high-resolution, short-time-span imaging in the technique known as positron emission tomography (PET).

Industrial Applications

Oil and gas exploration: nuclear well logging is used to help predict the commercial viability of new or existing wells. The technology involves a neutron or gamma-ray source and a radiation detector, which are lowered into boreholes to determine properties of the surrounding rock such as porosity and lithology.[1]

Road construction: nuclear moisture/density gauges are used to determine the density of soils, asphalt, and concrete. Typically a Cesium-137 source is used.

Commercial Applications

An ionization smoke detector includes a tiny mass of radioactive americium-241, a source of alpha radiation. Tritium is used with phosphor in rifle sights to increase nighttime firing accuracy. Luminescent exit signs use the same technology.[4]

Food Processing And Agriculture

Food irradiation[5] is the process of exposing food to ionizing radiation in order to destroy microorganisms, bacteria, viruses, or insects that might be present in it; treated food is marked with the Radura logo. Further applications include sprout inhibition, delay of ripening, increase of juice yield, and improvement of re-hydration. Irradiation is the more general term for the deliberate exposure of materials to radiation to achieve a technical goal (in this context, ionizing radiation is implied).
As such it is also used on non-food items such as medical hardware, plastics, tubes for gas pipelines, hoses for floor heating, shrink foils for food packaging, automobile parts, wires and cables (insulation), tires, and even gemstones. Compared to the amount of food irradiated, the volume of these everyday applications is huge, but it goes unnoticed by the consumer.

The genuine effect of processing food by ionizing radiation is damage to DNA, the basic genetic information for life. Microorganisms can no longer proliferate and continue their malignant or pathogenic activities. Spoilage-causing microorganisms cannot continue their activities. Insects do not survive, or become incapable of reproduction. Plants cannot continue the natural ripening or aging process. All these effects are beneficial to the consumer and the food industry alike.[5]

The amount of energy imparted for effective food irradiation is low compared to cooking; even at a typical dose of 10 kGy, most food - which with regard to warming is physically equivalent to water - would warm by only about 2.5 °C. What is special about processing food with ionizing radiation is that the energy per atomic transition is very high: it can cleave molecules and induce ionization (hence the name), which cannot be achieved by mere heating. This is the reason for new beneficial effects, but at the same time for new concerns. The treatment of solid food by ionizing radiation can provide an effect similar to heat pasteurization of liquids such as milk. However, the use of the term "cold pasteurization" to describe irradiated foods is controversial, because pasteurization and irradiation are fundamentally different processes, although the intended end results can in some cases be similar.

Nuclear Energy

The sun and stars are seemingly inexhaustible sources of energy. That energy is the result of nuclear reactions, in which matter is converted to energy.
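The phrase "matter is converted to energy" can be made concrete with a back-of-envelope Python check, using standard textbook values (about 200 MeV released per uranium-235 fission, and roughly 24 MJ per kilogram of coal burned); the variable names are invented for this sketch:

```python
# Back-of-envelope estimate of fission energy density (assumed textbook
# values: ~200 MeV per U-235 fission, coal ~24 MJ/kg when burned).
AVOGADRO = 6.022e23   # nuclei per mole
MEV_TO_J = 1.602e-13  # joules per MeV

nuclei_per_kg = AVOGADRO * 1000 / 235            # U-235 nuclei in 1 kg
energy_per_kg = nuclei_per_kg * 200 * MEV_TO_J   # ~8.2e13 J from 1 kg of U-235

coal_energy_per_kg = 24e6                        # J per kg of coal burned
ratio = energy_per_kg / coal_energy_per_kg
print(f"1 kg of U-235 is worth roughly {ratio:,.0f} kg of coal")  # a few million
```

The result, a few million kilograms of coal per kilogram of uranium-235, is consistent with the comparison made later in this article between one ton of uranium and several million tons of coal.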
We have been able to harness that mechanism and regularly use it to generate power. Presently, nuclear energy provides approximately 16% of the world's electricity. Unlike the stars, the nuclear reactors that we have today work on the principle of nuclear fission. Scientists are working hard to develop fusion reactors, which have the potential of providing more energy with fewer disadvantages than fission reactors.

Production

Changes can occur in the structure of the nuclei of atoms. These changes are called nuclear reactions, and the energy released in a nuclear reaction is called nuclear energy, or atomic energy. Nuclear energy is produced both naturally and in man-made operations under human control.

Naturally: some nuclear energy is produced naturally. For example, the Sun and other stars make heat and light by nuclear reactions.

Man-made: nuclear energy can be man-made too. Machines called nuclear reactors, parts of nuclear power plants, provide electricity for many cities. Man-made nuclear reactions also occur in the explosion of atomic and hydrogen bombs.

Nuclear energy is produced in two different ways: in one, large nuclei are split to release energy; in the other, small nuclei are combined to release energy. For a more detailed look at nuclear fission and nuclear fusion, consult the nuclear physics page.

Nuclear fission: in nuclear fission, the nuclei of atoms are split, causing energy to be released. The atomic bomb and nuclear reactors work by fission. The element uranium is the main fuel used to undergo nuclear fission, since it has many favorable properties: uranium nuclei can be easily split by shooting neutrons at them, and once a uranium nucleus is split, multiple neutrons are released which can split other uranium nuclei. This phenomenon is known as a chain reaction. (Figure: fission of a uranium-235 nucleus.)

Nuclear fusion: in nuclear fusion, the nuclei of atoms are joined together, or fused.
This happens only under very hot conditions. The Sun, like all other stars, creates heat and light through nuclear fusion. In the Sun, hydrogen nuclei fuse to make helium. The hydrogen bomb, humanity's most powerful and destructive weapon, also works by fusion: the heat required to start the fusion reaction is so great that an atomic bomb is used to provide it. Hydrogen nuclei fuse to form helium and in the process release huge amounts of energy, producing an enormous explosion.

Milestones In The History Of Nuclear Energy

A more in-depth and detailed history of nuclear energy is on the nuclear past page.

December 2, 1942: the Nuclear Age began at the University of Chicago when Enrico Fermi achieved a chain reaction in a pile of uranium.

August 6, 1945: the United States dropped an atomic bomb on Hiroshima, Japan, killing over 100,000 people.

August 9, 1945: the United States dropped an atomic bomb on Nagasaki, Japan, killing over 40,000 people.

November 1, 1952: the first large version of the hydrogen bomb (thousands of times more powerful than the atomic bomb) was exploded by the United States for testing purposes.

October 17, 1956: Calder Hall, the first major nuclear power plant, opened in England.

Advantages Of Nuclear Energy

The Earth has limited supplies of coal and oil. Nuclear power plants could still produce electricity after coal and oil become scarce. Nuclear power plants also need less fuel than ones which burn fossil fuels: one ton of uranium produces more energy than several million tons of coal or several million barrels of oil. And while coal- and oil-burning plants pollute the air, well-operated nuclear power plants do not release contaminants into the environment.

Disadvantages Of Nuclear Energy

The nations of the world now have more than enough nuclear bombs to kill every person on Earth. The two most powerful nations - Russia and the United States - have about 50,000 nuclear weapons between them. What if there were a nuclear war? What if terrorists got their hands on nuclear weapons?
Or what if nuclear weapons were launched by accident? Nuclear explosions produce radiation, which harms the cells of the body and can make people sick or even kill them. Illness can strike people years after their exposure to nuclear radiation.

One possible type of reactor disaster is known as a meltdown. In such an accident, the fission reaction overheats out of control, melting the reactor core and releasing great amounts of radiation (though not a nuclear explosion like a bomb's). In 1979, the cooling system failed at the Three Mile Island nuclear reactor near Harrisburg, Pennsylvania. Radiation leaked, forcing tens of thousands of people to flee. The problem was solved minutes before a total meltdown would have occurred, and fortunately there were no deaths.

In 1986, a much worse disaster struck the Chernobyl nuclear power plant in the Soviet Union (in present-day Ukraine). In this incident, a large amount of radiation escaped from the reactor. Hundreds of thousands of people were exposed to the radiation. Several dozen died within a few days, and in the years to come thousands more may die of cancers induced by the radiation.

Nuclear reactors also have waste disposal problems. Reactors produce nuclear waste products which emit dangerous radiation. Because they could kill people who touch them, they cannot be thrown away like ordinary garbage; currently, much nuclear waste is stored in special cooling pools at the reactors. The United States plans to move its nuclear waste to a remote underground dump by the year 2010. In 1957, at a dump site in the Soviet Union's Ural Mountains, several hundred miles from Moscow, buried nuclear wastes mysteriously exploded, killing dozens of people. Finally, nuclear reactors only last for about forty to fifty years.

The Future Of Nuclear Energy

Some people think that nuclear energy is here to stay and we must learn to live with it. Others say that we should get rid of all nuclear weapons and power plants. Both sides have their cases, as there are advantages and disadvantages to nuclear energy.
Still others have opinions that fall somewhere in between. What do you think we should do? After reviewing the pros and cons, it is up to you to formulate your own opinion. Read more about the politics of the issues, or go to the forum to share your own opinions and see what others think.

Nuclear Power Plant Types

The structure of a nuclear power plant in many respects resembles that of a conventional thermal power station, since in both cases the heat produced in the boiler (or reactor) is transported by some coolant and used to generate steam. The steam then drives the blades of a turbine, and the connected generator produces electric energy. The steam goes on to the condenser, where it condenses, i.e. becomes liquid again. The cooled-down water afterwards returns to the boiler or reactor or, in the case of PWRs, to the steam generator.

The great difference between a conventional and a nuclear power plant is how the heat is produced. In a fossil plant, oil, gas, or coal is burned in the boiler, converting the chemical energy of the fuel into heat. In a nuclear power plant, the energy comes from fission reactions.

Several nuclear power plant (NPP) types are used for energy generation around the world. The different types are usually classified by the main features of the reactor used in them. The most widespread power plant reactor types are:

Light water reactors: both the moderator and coolant are light water (H2O).
To this category belong the pressurized water reactors (PWR) and boiling water reactors (BWR).

Heavy water reactors (CANDU): both the coolant and moderator are heavy water (D2O).

Graphite moderated reactors: in this category there are gas cooled reactors (GCR) and light water cooled reactors (RBMK).

Exotic reactors: fast breeder reactors and other experimental installations.

New generation reactors: reactors of the future.
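The reactor classification above can be restated as a small lookup table. A minimal Python sketch, with the moderator/coolant pairings taken from the list (the dictionary layout and helper function are invented for illustration):

```python
# Reactor families from the classification above, keyed by common abbreviation.
# Moderator/coolant entries follow the list in the text; RBMK is graphite
# moderated but light-water cooled.
REACTOR_TYPES = {
    "PWR": {"family": "light water", "moderator": "H2O", "coolant": "H2O"},
    "BWR": {"family": "light water", "moderator": "H2O", "coolant": "H2O"},
    "CANDU": {"family": "heavy water", "moderator": "D2O", "coolant": "D2O"},
    "GCR": {"family": "graphite moderated", "moderator": "graphite", "coolant": "gas"},
    "RBMK": {"family": "graphite moderated", "moderator": "graphite", "coolant": "H2O"},
}

def types_with_moderator(moderator):
    """List reactor abbreviations that use the given moderator."""
    return sorted(name for name, info in REACTOR_TYPES.items()
                  if info["moderator"] == moderator)

print(types_with_moderator("graphite"))  # ['GCR', 'RBMK']
```

Tabulating the families this way makes the classification criterion explicit: it is the moderator/coolant combination, not the fuel, that distinguishes the main power plant types.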