
Monday, January 31, 2011

Pompidou Centre

Pompidou Centre, French Centre Pompidou, in full Centre National d’Art et de Culture Georges Pompidou (“Georges Pompidou National Art and Cultural Centre”), French national cultural centre on the Rue Beaubourg and on the fringes of the historic Marais section of Paris; a regional branch is located in Metz. It is named after the French president Georges Pompidou, under whose administration the museum was commissioned.

The Pompidou Centre was formally opened on January 31, 1977, by the French president, Valéry Giscard d’Estaing. Its overpowering industrial-looking exterior, which dwarfs its surroundings, attracted notoriety for its brightly coloured exterior pipes, ducts, and other exposed services. The architects were Renzo Piano of Italy and Richard Rogers of Britain. The Pompidou Centre quickly became a popular attraction and was reckoned to be the most frequently visited cultural monument in the world.

Primarily a museum and centre for the visual arts of the 20th century, the Pompidou Centre houses many separate services and activities. Its museum of modern art brought under one roof several public collections of modern art previously housed in a number of other Paris galleries. There are also frequent temporary exhibitions devoted to modern themes. In addition there is a large public library, a centre for industrial design, a film museum, and an important musical centre associated with the French conductor and composer Pierre Boulez, known as the Centre for Musical and Acoustical Research (Ircam). The music centre comprises rehearsal rooms, studios, and a concert hall and presents concerts devoted primarily to modern music.
The Pompidou Centre Metz, an outpost of the centre, opened in May 2010. The avant-garde building, designed by Ban Shigeru of Japan and Jean de Gastines of France, is situated in a park and features an undulating roof of woven timber that was inspired by a Chinese bamboo hat. The Metz centre’s collection is devoted to modern art and includes works by Pablo Picasso, Henri Matisse, and Joan Miró.

Luna

Luna 9, the first spacecraft to soft-land on the Moon. It was launched by the Soviet Union January …
[Credit: Novosti Press Agency]
Luna, any of a series of 24 unmanned Soviet lunar probes launched between 1959 and 1976.

Luna 1 (launched Jan. 2, 1959) was the first spacecraft to escape Earth’s gravity. It failed to impact the Moon as planned and became the first man-made object to go into orbit around the Sun. Luna 2 (launched Sept. 12, 1959) was the first spacecraft to strike the Moon, and Luna 3 (Oct. 4, 1959) made the first circumnavigation of the Moon and returned the first photographs of its far side. Luna 9 (Jan. 31, 1966) made the first successful lunar soft landing. Luna 16 (Sept. 12, 1970) was the first unmanned spacecraft to carry lunar soil samples back to Earth. Luna 17 (Nov. 10, 1970) soft-landed a robot vehicle, Lunokhod 1, for exploration. It also contained television equipment, by means of which it transmitted live pictures of several kilometres of the Moon’s surface. Luna 22 (May 29, 1974) orbited the Moon 2,842 times while conducting space research in its vicinity. Luna 24 (Aug. 9, 1976) returned with lunar soil samples taken from a depth of seven feet (about two metres) below the surface.

Sunday, January 30, 2011

About Ampa Skywalk



Ampa Skywalk, Chennai’s dream destination for a shopping and entertainment experience, is situated at No. 1, Nelson Manickkam Road, Aminjikarai, Chennai 29, in the vicinity of areas such as Anna Nagar, Kilpauk, Nungambakkam, Egmore, T. Nagar, and Chetpet. Skywalk, rightly known as the “Mall for All,” stretches over 3.15 lakh sq. ft. and is complemented by 3.50 lakh sq. ft. of parking, ensuring a complete shopper’s experience. The mall’s anchor tenants include:

   Westside, sprawling over 25,000 sq. ft., provides customers a one-stop shopping experience

   Tata Star Bazaar, spreading over 56,000 sq. ft., is Chennai’s first hypermarket, providing a complete end-to-end solution for all household requirements

   PVR, a premier multiplex with 7 screens and a total of 1,800 seats, providing the finest movie experience.

   Landmark
The mall offers 3 floors of a complete retail experience of international and national brands, ensuring that every category of product a shopper requires is covered by the best available brands. Each floor level is zoned so that shoppers have a completely hassle-free experience in finding what they need.

   The ground level houses premium and international brands, connecting customers with elite names such as McDonald's, KFC, The Body Shop, Giordano, Benetton, Rado, and Levi's.

   The first floor is zoned for youth and men’s apparel, including Adidas, Reebok, Jus Sports, Planet Sports, Lee, Wrangler, Allen Solly, and Coffee Day.

   The second floor, categorized for ladies’, children’s, and home products, boasts brands such as BIBA, Soles, Mustard, and Health & Glow.

biofuel

Primary Contributors: Clarence Lehman, Noelle Eckley Selin


A worker pumping palm-oil-derived biodiesel fuel into a tanker at a plant in Ipoh, Malaysia.
[Credit: Zainal Abd Halim—Reuters/Landov]
Ethanol gas fuel pump delivering the E85 mixture to an automobile in Washington state, U.S.
[Credit: © Carolina K. Smith, M.D./Shutterstock.com]
DuPont scientist Max Li developing new biofuels in his state-of-the-art fermentation lab at the …
[Credit: PRNewsFoto/DuPont/AP Images]
biofuel, any fuel that is derived from biomass, that is, plant material or animal waste. Since such feedstock material can be replenished readily, biofuel is considered to be a source of renewable energy, unlike fossil fuels such as petroleum, coal, and natural gas. Biofuel is perceived by its advocates as a cost-effective and environmentally benign alternative to petroleum and other fossil fuels, particularly within the context of rising petroleum prices and increased concern over the contributions made by fossil fuels to global warming. Many critics express concerns about the scope of the expansion of certain biofuels because of the economic and environmental costs associated with the refining process and the removal of vast areas of arable land from food production.

Types of biofuels

Some long-exploited biofuels, such as wood, can be used directly as a raw material that is burned to produce heat. The heat, in turn, can be used to run generators in a power plant to produce electricity. A number of existing power facilities burn grass, wood, or other kinds of biomass.

A cutting machine on a plantation in southeastern Brazil harvesting sugarcane, the primary source …
[Credit: Andre Penner/AP]
An ethanol production plant in South Dakota, U.S.
[Credit: © Jim Parkin/Shutterstock.com]
Liquid biofuels are of particular interest because of the vast infrastructure already in place to use them, especially for transportation. The liquid biofuel in greatest production is ethanol (ethyl alcohol), which is made by fermenting starch or sugar. Brazil and the United States are among the leading producers of ethanol. In the United States, ethanol biofuel is made primarily from corn (maize) grain, and it is typically blended with gasoline to produce “gasohol,” a fuel that is 10 percent ethanol. In Brazil, ethanol biofuel is made primarily from sugarcane, and it is commonly used as a 100-percent-ethanol fuel or in gasoline blends containing 85 percent ethanol.

The second most common liquid biofuel is biodiesel, which is made primarily from oily plants (such as the soybean or oil palm) and to a lesser extent from other oily sources (such as waste cooking fat from restaurant deep-frying). Biodiesel, which has found greatest acceptance in Europe, is used in diesel engines and usually blended with petroleum diesel fuel in various percentages.

Other biofuels include methane gas, which can be derived from the decomposition of biomass in the absence of oxygen, as well as methanol, butanol, and dimethyl ether, which are still in development. At present, much focus is on the development of methods to produce ethanol from biomass that possesses a high cellulose content. This cellulosic ethanol could be produced from abundant low-value material, including wood chips, grasses, crop residues, and municipal waste. The mix of commercially used biofuels will undoubtedly shift as these fuels are developed, but the range of possibilities presently known could furnish power for transportation, heating, cooling, and electricity.

Economic and environmental considerations

Unloading kernels of corn (maize) from a truck into a delivery chute at a bioethanol plant in …
[Credit: Jason Reed—Reuters/Landov]
In evaluating the economic benefits of biofuels, the energy required to produce them has to be taken into account. For example, the process of growing corn to produce ethanol consumes fossil fuels in farming equipment, in fertilizer manufacturing, in corn transportation, and in ethanol distillation. In this respect ethanol made from corn represents a relatively small energy gain; the energy gain from sugarcane is greater and that from cellulosic ethanol could be even greater.
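
The energy-accounting argument above can be made concrete with a small back-of-the-envelope sketch. This is a minimal illustration only: the fossil-energy input figures below are invented placeholders (the ~21 MJ per litre output is roughly ethanol's heating value), and the point is the structure of the calculation, not the numbers.

```python
# Hypothetical net-energy calculation for a litre of corn ethanol.
# Input figures are illustrative placeholders, not measured values.

def net_energy_ratio(energy_out_mj: float, energy_in_mj: float) -> float:
    """Energy delivered by the fuel divided by fossil energy spent producing it."""
    return energy_out_mj / energy_in_mj

# Fossil energy spent per litre of ethanol (farming, fertilizer, transport, distillation).
inputs_mj_per_litre = {
    "farm_equipment_fuel": 2.0,
    "fertilizer_manufacture": 4.5,
    "corn_transportation": 1.0,
    "ethanol_distillation": 9.0,
}
energy_in = sum(inputs_mj_per_litre.values())   # ~16.5 MJ per litre (illustrative)
energy_out = 21.1                               # roughly the heating value of 1 L of ethanol, in MJ

ratio = net_energy_ratio(energy_out, energy_in)
print(f"energy out / energy in = {ratio:.2f}")  # a value just above 1 means a small net gain
```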

Biofuels also supply environmental benefits but, depending on how they are manufactured, can also have serious environmental drawbacks. As a renewable energy source, plant-based biofuels in principle make little net contribution to global warming and climate change; the carbon dioxide (a major greenhouse gas) that enters the air during combustion will have been removed from the air earlier as growing plants engage in photosynthesis. Such a material is said to be “carbon neutral.” In practice, however, the industrial production of agricultural biofuels can result in additional emissions of greenhouse gases that may offset the benefits of using a renewable fuel. These emissions include carbon dioxide from the burning of fossil fuels during the production process and nitrous oxide from soil that has been treated with nitrogen fertilizer. In this regard, cellulosic biomass is considered to be more beneficial.

Land use is also a major factor in evaluating the benefits of biofuels. Corn and soybeans are important foods, and their use in producing fuel can therefore affect the economics of food price and availability. By 2007 about one-fifth of the corn output in the United States was allocated to the production of biofuel, and one study showed that even if all U.S. corn land was used to produce ethanol, it could replace just 12 percent of gasoline consumption. In addition, crops grown for biofuel can compete for the world’s natural habitats. For example, emphasis on ethanol derived from corn is shifting grasslands and brushlands to corn monocultures, and emphasis on biodiesel is bringing down ancient tropical forests to make way for palm plantations. Loss of natural habitat can change the hydrology, increase erosion, and generally reduce biodiversity of wildlife areas. The clearing of land can also result in the sudden release of a large amount of carbon dioxide as the plant matter that it contains is burned or allowed to decay.

Some of the disadvantages of biofuels apply mainly to low-diversity biofuel sources, such as corn, soybeans, sugarcane, and oil palms, which are traditional agricultural crops. One alternative involves the use of highly diverse mixtures of species, with the North American tallgrass prairie as a specific example. Converting degraded agricultural land that is out of production to such high-diversity biofuel sources could increase wildlife area, reduce erosion, cleanse waterborne pollutants, store carbon dioxide from the air as carbon compounds in the soil, and ultimately restore fertility to degraded lands. Such biofuels could be burned directly to generate electricity or converted to liquid fuels as technologies develop.

The proper way to grow biofuels to serve all needs simultaneously will continue to be a matter of much experimentation and debate, but the fast growth in biofuel production will likely continue. In the European Union, for example, biofuels were planned to account for 5.75 percent of transport fuels by 2010, and 10 percent of European vehicles are expected to run exclusively on biofuels by 2020. In the United States the Energy Independence and Security Act of 2007 mandated the use of 136 billion litres (36 billion gallons) of biofuels annually by 2022, more than a sixfold increase over 2006 production levels. The legislation also requires, with certain stipulations, that 79 billion litres (21 billion gallons) of the total amount be biofuels other than corn-derived ethanol, and it continued certain government subsidies and tax incentives for biofuel production. In addition, the technology for producing cellulosic ethanol is being developed at a number of pilot plants in the United States.

One distinctive promise of biofuels is that, in combination with an emerging technology called carbon capture and storage, the process of producing and using biofuels may be capable of perpetually removing carbon dioxide from the atmosphere. Under this vision, biofuel crops would remove carbon dioxide from the air as they grow, and energy facilities would capture the carbon dioxide given off as biofuels are burned to generate power. Captured carbon dioxide could be sequestered (stored) in long-term repositories such as geologic formations beneath the land, in sediments of the deep ocean, or conceivably as solids such as carbonates.

Saturday, January 29, 2011

Black Eyed Peas

Black Eyed Peas, American musical group with a multiracial lineup and an eclectic range of styles encompassing hip-hop, dance, and pop. The Black Eyed Peas originated in the underground hip-hop movement of the 1990s. After the dissolution of their group Atban Klann, rappers will.i.am (byname of William James Adams, Jr.; b. March 15, 1975, Los Angeles, Calif., U.S.) and apl.de.ap (byname of Allan Pineda Lindo; b. Nov. 28, 1974, Angeles City, Pampanga, Phil.) recruited MC and dancer Taboo (byname of Jaime Luis Gomez; b. July 14, 1975, East Los Angeles, Calif.) to form the Black Eyed Peas. The group’s debut recording, Behind the Front (1998), gained attention for its positive socially conscious lyrics and musical dexterity.

Bridging the Gap (2000), boasting guest appearances by hip-hop performers Mos Def, De La Soul, and Wyclef Jean, continued in a similar vein. With the addition of vocalist Fergie (byname of Stacy Ann Ferguson; b. March 27, 1975, Hacienda Heights, Calif.) in 2001, however, the group abandoned the hip-hop underground for the pop mainstream. Elephunk (2003) yielded the upbeat club-friendly hit singles “Where Is the Love?” (a collaboration with Justin Timberlake), “Hey Mama,” and “Let’s Get It Started” (titled “Let’s Get Retarded” on the album) and went on to sell more than two million copies. Its follow-up, Monkey Business (2005), featuring the exuberant top-five hits “Don’t Phunk with My Heart” and “My Humps,” was even more commercially successful.

After an extensive concert tour in support of Monkey Business, the group was dormant for several years. In 2006 Fergie released a multiplatinum solo record, The Dutchess. Will.i.am, who produced much of that album, released his own Songs About Girls the following year. The Black Eyed Peas returned in 2009 with The E.N.D., which cemented their prominence in the pop music world. Between the singles “Boom Boom Pow” and “I Gotta Feeling,” the group occupied the number one position on the Billboard Hot 100 for an unprecedented 26 straight weeks in the middle of that year. In 2010 they won three Grammy Awards, including best pop vocal album.

Cloud Computing


Primary Contributor: Nicholas Carr


The José Vasconcelos Library in Mexico City, Mexico, includes some 700 computer terminals …
[Credit: AP]
cloud computing, method of running application software and storing related data in central computer systems and providing customers or other users access to them through the Internet.

Early development

The origin of the expression cloud computing is obscure, but it appears to derive from the practice of using drawings of stylized clouds to denote networks in diagrams of computing and communications systems. The term came into popular use in 2008, though the practice of providing remote access to computing functions through networks dates back to the mainframe time-sharing systems of the 1960s and 1970s. In his 1966 book The Challenge of the Computer Utility, the Canadian electrical engineer Douglas F. Parkhill predicted that the computer industry would come to resemble a public utility “in which many remotely located users are connected via communication links to a central computing facility.”

For decades, efforts to create large-scale computer utilities were frustrated by constraints on the capacity of telecommunications networks such as the telephone system. It was cheaper and easier for companies and other organizations to store data and run applications on private computing systems maintained within their own facilities.

The constraints on network capacity began to be removed in the 1990s when telecommunications companies invested in high-capacity fibre-optic networks in response to the rapidly growing use of the Internet as a shared network for exchanging information. In the late 1990s, a number of companies, called application service providers (ASPs), were founded to supply computer applications to companies over the Internet. Most of the early ASPs failed, but their model of supplying applications remotely became popular a decade later, when it was renamed cloud computing.

Cloud services and major providers

Cloud computing encompasses a number of different services. One set of services, sometimes called software as a service (SaaS), involves the supply of a discrete application to outside users. The application can be geared either to business users (such as an accounting application) or to consumers (such as an application for storing and sharing personal photographs). Another set of services, variously called utility computing, grid computing, and hardware as a service (HaaS), involves the provision of computer processing and data storage to outside users, who are able to run their own applications and store their own data on the remote system. A third set of services, sometimes called platform as a service (PaaS), involves the supply of remote computing capacity along with a set of software-development tools for use by outside software programmers.
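
The three categories can be summarized as a simple mapping of who supplies what. The sketch below is an illustrative simplification (real offerings blur these lines, and the component names are invented for the example), not a formal definition.

```python
# A minimal sketch of the three cloud service categories described above.
CLOUD_SERVICE_MODELS = {
    "SaaS": {  # software as a service: a finished application delivered over the Internet
        "provider_supplies": ["application", "runtime", "servers", "storage"],
        "user_supplies": ["data", "configuration"],
    },
    "HaaS": {  # utility/grid computing, hardware as a service: raw processing and storage
        "provider_supplies": ["servers", "storage", "network"],
        "user_supplies": ["operating environment", "applications", "data"],
    },
    "PaaS": {  # platform as a service: computing capacity plus software-development tools
        "provider_supplies": ["runtime", "development tools", "servers", "storage"],
        "user_supplies": ["application code", "data"],
    },
}

def models_offering(component: str) -> list[str]:
    """Return the service models in which the provider supplies a given component."""
    return [name for name, spec in CLOUD_SERVICE_MODELS.items()
            if component in spec["provider_supplies"]]

print(models_offering("development tools"))  # ['PaaS']
print(models_offering("application"))        # ['SaaS']
```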
Early pioneers of cloud computing include Salesforce.com, which supplies a popular business application for managing sales and marketing efforts; Google, Inc., which in addition to its search engine supplies an array of applications, known as Google Apps, to consumers and businesses; and Amazon Web Services, a division of online retailer Amazon.com, which offers access to its computing system to Web-site developers and other companies and individuals. Cloud computing also underpins popular social networks and other online media sites such as Facebook, MySpace, and Twitter. Traditional software companies, including Microsoft Corporation, Apple Inc., Intuit Inc., and Oracle Corporation, have also introduced cloud applications.
Cloud-computing companies either charge users for their services, through subscriptions and usage fees, or provide free access to the services and charge companies for placing advertisements in the services. Because the profitability of cloud services tends to be much lower than the profitability of selling or licensing hardware components and software programs, it is viewed as a potential threat to the businesses of many traditional computing companies.

Data centres and privacy

Construction of the large data centres that run cloud-computing services often requires investments of hundreds of millions of dollars. The centres typically contain thousands of server computers networked together into parallel-processing or grid-computing systems. The centres also often employ sophisticated virtualization technologies, which allow computer systems to be divided into many virtual machines that can be rented temporarily to customers. Because of their intensive use of electricity, the centres are often located near hydroelectric dams or other sources of cheap and plentiful electric power.

Because cloud computing involves the storage of often sensitive personal or commercial information in central database systems run by third parties, it raises concerns about data privacy and security as well as the transmission of data across national boundaries. It also stirs fears about the eventual creation of data monopolies or oligopolies. Some believe that cloud computing will, like other public utilities, come to be heavily regulated by governments.

Friday, January 28, 2011

computer virus

computer virus, a portion of program code that has been designed to furtively copy itself into other such codes or computer files. It is usually created by a prankster or vandal to effect a nonutilitarian result or to destroy data and program code.

A virus consists of a set of instructions that attaches itself to other computer programs, usually in the computer’s operating system, and becomes part of them. In most cases, the corrupted programs continue to perform their intended functions but surreptitiously execute the virus’s instructions as well. A virus is usually designed to execute when it is loaded into a computer’s memory. Upon execution, the virus instructs its host program to copy the viral code into, or “infect,” any number of other programs and files stored in the computer. The infection can then transfer itself to files and code on other computers through magnetic disks or other memory-storage devices, computer networks, or online systems. The replicating viruses often multiply until they destroy data or render other program codes meaningless. A virus may simply cause a harmless joke or cryptic message to appear on a computer user’s video monitor each time he turns on his computer. A more damaging virus can wreak havoc on an extremely large computer system within a matter of minutes or hours, causing it to crash and thereby destroy valuable data.

Challenger disaster

U.S. space shuttle Challenger just seconds after its explosive destruction on Jan. …
[Credit: NASA]
Challenger disaster, explosion of the U.S. space shuttle orbiter Challenger, shortly after its launch from Cape Canaveral, Fla., on Jan. 28, 1986, which claimed the lives of seven astronauts.

The primary goal of shuttle mission 51-L was to launch the second Tracking and Data Relay Satellite (TDRS-B). It also carried the Spartan Halley spacecraft, a small satellite that was to be released by Challenger and picked up two days later after observing Halley’s Comet during its closest approach to the Sun.

Christa McAuliffe.
[Credit: NASA/Johnson Space Center]
Greatest visibility among the crew went to teacher-in-space Christa McAuliffe of Concord, N.H., the winner of a national screening begun in 1984. McAuliffe was to conduct at least two lessons from orbit and then spend the following nine months lecturing students across the United States. The goal was to highlight the importance of teachers and to interest students in high-tech careers. Other members of the crew were commander Francis (Dick) Scobee, pilot Michael Smith, mission specialists Ellison Onizuka, Judith Resnik, and Ronald McNair, and Hughes Aircraft engineer Gregory Jarvis.
The mission experienced trouble at the outset, as the launch was postponed for several days, partly because of delays in getting the previous shuttle mission, 61-C (Columbia), back on the ground. On the night before the launch, central Florida was swept by a severe cold wave that deposited thick ice on the launch pad. On launch day, January 28, liftoff was delayed until 11:38 am. All appeared to be normal until after the vehicle emerged from “Max-Q,” the period of greatest aerodynamic pressure. Mission Control told Scobee, “Challenger, go at throttle up,” and seconds later the vehicle disappeared in an explosion just 73 seconds after liftoff, at an altitude of 14,000 metres (46,000 feet). Tapes salvaged from the wreckage showed that the instant before breakup Smith said “Uh-oh,” but nothing else was heard. Debris rained into the Atlantic Ocean for more than an hour after the explosion; searches revealed no sign of the crew.

The incident immediately grounded the shuttle program. An intensive investigation by the National Aeronautics and Space Administration (NASA) and a commission appointed by U.S. Pres. Ronald Reagan and chaired by former secretary of state William Rogers followed. Other members of the commission included astronauts Neil Armstrong and Sally Ride, test pilot Chuck Yeager, and physicist Richard Feynman. What emerged was an appalling pattern of assumptions that the vehicle could survive minor mishaps and be pushed even further. The ill-fated launch brought to the fore the difficulties that NASA had been experiencing for many years in trying to accomplish too much with too little money.

The immediate cause of the accident was suspected within days and was fully established within a few weeks. The severe cold reduced the resiliency of two rubber O-rings that sealed the joint between the two lower segments of the right-hand solid rocket booster. (At a commission hearing, Feynman convincingly demonstrated the loss of O-ring resiliency by submerging an O-ring in a glass of ice water.) Under normal circumstances, when the shuttle’s three main engines ignited they pressed the whole vehicle forward, and the boosters were ignited when the vehicle swung back to centre. On the morning of the accident, an effect called “joint rotation” occurred, which prevented the rings from resealing and opened a path for hot exhaust gas to escape from inside the booster. Puffs of black smoke appeared on the far side of the booster in a spot not visible to most cameras.

As the vehicle ascended, the leak expanded, and after 59 seconds a 2.4-metre (8-foot) stream of flame emerged from the hole. This grew to 12 metres (40 feet) and gradually eroded one of three struts that secured the booster’s base to the large external tank carrying liquid hydrogen and liquid oxygen for the orbiter engines. At the same time, thrust in the booster lagged slightly, although within limits, and the nozzle steering systems tried to compensate. When the strut broke, the booster’s base swiveled outward, forcing its nose through the top of the external fuel tank and causing the whole tank to collapse and explode. Through ground tracking cameras this was seen as a brief flame licking from a concealed spot on the right side of the vehicle a few seconds before everything disappeared in the fireball. Even if the plume had been seen at liftoff, there would have been no hope for crew escape, because the shuttle orbiter could not survive high-speed separation from the tank until the last seconds of the boosters’ two-minute burn.

Challenger broke up in the explosion, but the forward section with the crew cabin was severed in one piece; it continued to coast upward with other debris, including wings and still-flaming engines, and then plummeted to the ocean. It was believed that the crew survived the initial breakup but that loss of cabin pressure rendered them unconscious within seconds since they did not wear pressure suits. Death probably resulted from oxygen deficiency minutes before impact.

The boosters also survived the fireball and righted themselves to continue flying, something totally unexpected. Range safety officers finally detonated their charges 30 seconds later to prevent them from overflying land. After the accident, NASA immediately began work on a redesigned solid booster for future launches.
An intensive salvage operation was organized to retrieve as much of the wreckage as possible and the bodies of the crew. The task was complicated by the force of the explosion and the altitude at which it occurred, as well as the separate paths taken by the boosters.

The Rogers Commission report, delivered on June 6 to the president, faulted NASA as a whole, and its Marshall Space Flight Center in Huntsville, Ala., and contractor Morton Thiokol, Inc., in Ogden, Utah, in particular, for poor engineering and management. Marshall was responsible for the shuttle boosters, engines, and tank, while Morton Thiokol manufactured the booster motors and assembled them at the Kennedy Space Center at Cape Canaveral, Fla.

The Rogers Commission heard disturbing testimony from a number of engineers who had been expressing concern about the reliability of the seals for at least two years and who had warned superiors about a possible failure the night before 51-L was launched. One of the Rogers Commission’s strongest recommendations was to tighten the communication gap between shuttle managers and working engineers. In response to this implied criticism that its quality-control measures had become slack, NASA added several more checkpoints in the shuttle bureaucracy, including a new NASA safety office and a shuttle safety advisory panel, in order to prevent such a “flawed” decision to launch from being made again.

Aside from these internal fixes at NASA, however, the Rogers Commission addressed a more fundamental problem. In NASA’s efforts to streamline shuttle operations in pursuit of its declared goal of flying 24 missions a year, the commission said, the agency had simply been pushing too hard. The shuttle program had neither the personnel nor the spare parts to maintain such an ambitious flight rate without straining its physical resources or overworking its technicians.

This judgment cut to the core of the way in which the national space program had been conducted in the shuttle era. Indeed, the Challenger accident merely focused attention on more deeply seated problems that had existed for as long as 15 years. From the time it was approved by Pres. Richard Nixon in 1972, the shuttle had been conceived as a “do-everything” vehicle for carrying every kind of space payload, from commercial and scientific satellites to military spacecraft to probes bound for the outer planets. NASA’s fleet of conventional “expendable” rockets such as the Delta and Atlas had been phased out in the shuttle era as a result and were being used primarily to reach polar orbits that the shuttle could not reach from Cape Canaveral.

Although this reliance on the shuttle was the officially stated national space policy, the Department of Defense began to retreat from relying exclusively on the shuttle even before the Challenger accident. Concerned that shuttle launch delays would jeopardize the assured access to space of high-priority national security satellites, the Air Force in 1985 began a program of buying advanced Titan rockets as “complementary expendable launch vehicles” for its own use.

Other, less powerful groups came forward after the Challenger accident to express their long-standing unhappiness with exclusive reliance on the shuttle for their access to space. Among those calling for a “mixed fleet” of shuttles and expendable launchers were scientists whose missions now faced long delays because the shuttle had become the only existing means of carrying their spacecraft.

By July, when NASA announced that the shuttle would not be ready to fly again until 1988, there was still no decision from Congress or the White House as to whether another orbiter would be built to replace Challenger. Proponents argued that another vehicle, perhaps two more, would be needed to meet the launch needs of the 1990s, which would include construction of NASA’s international space station, a permanent facility in Earth orbit.

In mid-August Pres. Ronald Reagan announced that construction of a replacement shuttle orbiter (later named Endeavour) would begin immediately. When the shuttle resumed service, however, it would no longer be in the business of launching satellites for paying customers but would be devoted almost exclusively to defense and scientific payloads. The Reagan administration had long had the goal of stimulating a private space launch industry, and now, with the removal of a heavily subsidized competitor from the market, three different companies stepped forward within a week’s time to announce plans for operating commercial versions of the Delta, Titan, and Atlas/Centaur launchers. (See the Britannica Classic The Year the Space Program Stopped by Tony Reichhardt.)

Thursday, January 27, 2011

Ferruccio Lamborghini

Ferruccio Lamborghini, Italian industrialist (b. April 28, 1916, Cento, Italy; d. Feb. 20, 1993, Perugia, Italy), founded a luxury car company that produced some of the fastest, most expensive, and most sought-after sports cars in the world. Lamborghini worked as a mechanic in the Italian army during World War II, and after the war he started a tractor company to build farm implements using recycled parts from Allied army surplus and abandoned German tanks. In 1963 he opened a state-of-the-art factory to manufacture a sports car that would challenge the top-ranked Ferrari high-performance cars. The innovative Lamborghini 350GT debuted that year, and three years later the company surprised the automobile industry with the Miura, a low-slung, V-12 two-seater that could exceed 298 km/h (185 mph). By the end of the 1960s Lamborghini’s luxury sports cars were in demand by sports car enthusiasts and by celebrities impressed with the cars’ quality and panache. In 1973 Lamborghini sold his share of the firm and retired to a country estate in Umbria, where he established an automobile museum, cultivated grapes, and produced premium table wine.

Ferdinand Porsche

Ferdinand Porsche, 1940.
[Credit: German Federal Archive (Bundesarchiv), Bild 183-2005-1017-525, photograph: o.Ang.]
Ferdinand Porsche (b. Sept. 3, 1875, Maffersdorf, Austria; d. Jan. 30, 1951, Stuttgart, W.Ger.), Austrian automotive engineer who designed the popular Volkswagen car.

Porsche became general director of the Austro-Daimler Company in 1916 and in 1923 moved to the Daimler Company in Stuttgart. He left in 1931 and formed his own firm to design sports and racing cars. Porsche became deeply involved in Adolf Hitler’s project for a “people’s car” and with his son Ferdinand, known as Ferry, was responsible for the initial design of the Volkswagen in 1934. During World War II the Porsches designed military vehicles, notably the Tiger tank. After the war the elder Porsche was imprisoned by the French for a time. In 1950 the Porsche sports car was introduced. The Porsche Museum opened in Zuffenhausen, a suburb of Stuttgart, in 2009.

Wednesday, January 26, 2011

Bessie Coleman

Bessie Coleman.
[Credit: NASA]
Bessie Coleman, byname of Elizabeth Coleman (b. Jan. 26, 1893, Atlanta, Texas, U.S.; d. April 30, 1926, Jacksonville, Fla.), American aviator and a star of early aviation exhibitions and air shows.
One of 13 children, Coleman grew up in Waxahachie, Texas, where her mathematical aptitude freed her from working in the cotton fields. She attended college in Langston, Oklahoma, briefly, then moved to Chicago, where she worked as a manicurist and restaurant manager and became interested in the then-new profession of aviation.

Discrimination thwarted Coleman’s attempts to enter aviation schools in the United States. Undaunted, she learned French and at age 27 was accepted at the Caudron Brothers School of Aviation in Le Crotoy, France. Black philanthropists Robert Abbott, founder of the Chicago Defender, and Jesse Binga, a banker, assisted with her tuition. On June 15, 1921, she became the first American woman to obtain an international pilot’s license from the Fédération Aéronautique Internationale. In further training in France, she specialized in stunt flying and parachuting; her exploits were captured on newsreel films. She returned to the United States, where racial and gender biases precluded her becoming a commercial pilot. Stunt flying, or barnstorming, was her only career option.

Bessie Coleman, U.S. commemorative stamp, 1995.
Coleman staged the first public flight by an African American woman in America on Labor Day, September 3, 1922. She became a popular flier at aerial shows, though she refused to perform before segregated audiences in the South. Speaking at schools and churches, she encouraged blacks’ interest in aviation; she also raised money to found a school to train black aviators. Before she could found her school, however, during a rehearsal for an aerial show, the plane carrying Coleman spun out of control, catapulting her 2,000 feet to her death.

Transcendental Transylvania



Our bus drove out onto the steamy tarmac of modern Ataturk International Airport in Istanbul. It came to a halt in front of a tiny propeller plane that would fly me and my fellow passengers to my next destination on ‘The LL World Tour:’ Bucharest, Romania. The aircraft looked like it was built in 1973; the gray leather seats were worn around the edges and super shiny in the middle from the hundreds of butts that had sat there before me. There were no high-tech movie screens or anything near what I experienced on Emirates Air just a few months back on my flight into Istanbul. In fact, the flight attendants still did the safety ‘song and dance’ themselves.


Castle in Sinaia, Romania. Lisa Lubin.
Castle in Sinaia, Romania

After an hour and a half of bobbing and weaving our way through the puffy-clouded skies, we landed in the former Eastern-bloc city of Bucharest, capital of one of the newest countries of the European Union. Back in independent traveler mode, I hit the ground running: smiling through passport control, picking up my checked suitcase, getting Romanian lei out of the ATM, and finding public bus number 783 into the city. It felt good to be ‘on the move’ once again.

The ride into town, surprisingly, was quite lovely, and we passed through what appeared to be a very green city with shady trees all around. Because of its wide leafy boulevards and public squares, Bucharest is sometimes called the ‘Paris of Eastern Europe.’ I think practically every country I’ve been in seems to claim one of its cities as some sort of ‘Paris’: Buenos Aires, the Paris of South America; Dalat, the Paris of Vietnam; Montreal, the Paris of Canada; Dubai, the Paris of…no, I’m just kidding about that last one.
After hearing several warnings from various friends who said Bucharest was not so nice (I can’t confirm this for myself), I planned on seeing more of the countryside and headed straight for the train station. Oh, and by the way, I was back in a country where the cars stop for people. Ah, civilization.

Sibiu, Romania. Lisa Lubin.
Sibiu, Romania

Before I left Istanbul, I’d wondered if I’d be lonely my first few days back out on my own again. But so far, this was not the case. On the bus I’d spotted a couple of guys who seemed like English-speaking tourists…well, it’s not detective work to spot a backpack or suitcase, but as far as the English…they just had a ‘look.’ I bumped into them in the metro station and said ‘hello.’ And just like that, as is so easy in the travel world, I had two new best friends for the next couple of days. Derek, originally from Los Angeles and more recently Seattle, was on a whirlwind tour of Eastern Europe. Now that I’m an experienced loafer…committing my time in big cities to at least a week (to several months) or small towns to at least three nights, I hate to see others rushing around so much. He was spending, at most, two nights in each city he hit. He’s getting a small taste of the biggest tourist areas, but never a genuine feel for the ‘real’ place, but, hey, at least he’s traveling. Bobby lived north of London and was a bit of a real estate mogul, buying up properties all over the globe, from Dubai to Sofia and Venezuela to Poland.

Main Square in Brasov, Romania. Lisa Lubin.
Main Square in Brasov, Romania

I was planning on taking the train to Brașov (Bra-shov), but the guys had read that something called a ‘Maxi Taxi’ (a minivan, basically) was cheaper and faster. We schlepped our bags and ourselves over crumbling sidewalks several blocks away from the train station in the not-so-quaint part of the city, where locals pointed us to the ‘maxi taxi stand.’ Unfortunately for us, when we got there we found out the one we were looking for did not even stop there. A tall, young Romanian guy with a cigarette dangling precariously out of the side of his mouth, who never made eye contact, told us gruffly that we could head across town to find it or just go back to the train station and take the next train. So much for the ‘two heads are better than one’ theory—I guess three heads are like a quarter brain? I had planned on buying a ticket and waiting for the train. Now, I was walking around in circles in Bucharest. We headed back to the train station and hopped on the next train for Transylvania. Of course, we found out later we’d gotten on the excruciatingly slow local train that made every stop along the way and would take about four hours instead of the express train that would have gotten us there in half the time. Oh well, it was just how things were going. At least this gave us time to get to know each other in our sweaty, hot train car with sticky pleather seats that we shared with friendly, yet pungent villagers.

During the long rail journey, we climbed higher into the mountains where the air was fresh and crisp and my spirits lifted. Brașov is the quintessential medieval European city, full of old Bavarian-like stuccoed buildings with red-tiled roofs and petunias spilling out of window flower boxes lining cobblestoned lanes where locals stroll arm in arm. A church bell gongs in the distance and the air is misty and cool. Cafes line the streets and idle chatter fills the air. I love it.

Sibiu, Romania. Lisa Lubin.
Sibiu, Romania

Romania is perhaps the most beautiful country in Eastern Europe, a last bastion of a medieval past long since lost elsewhere. The mighty Carpathian mountain range cuts right through the heart of it, surprising you with jagged mountain vistas and lush green valleys dotted with fortified centuries-old villages. You can just imagine the Middle Ages, with fierce horsemen galloping by on their way to attack and pillage the next tiny hamlet, killing the men, stealing the women, and eating wild beasts with their bare hands. The mythic land of Transylvania is the region of Vlad (the Impaler) Țepeș, the real-life prince notorious for torture who was the inspiration for 19th-century novelist Bram Stoker’s Count Dracula. The intact medieval villages of this region are a trip back in time, with their charming town squares, foreboding stone watchtowers, stately churches, and surrounding Bavarian-style homes reflecting the region’s Hungarian and German ancestry. I wasn’t sure exactly what Romania would be like, and this was just perfect. I am reminded once again how much I just love the look and ‘feel’ of Europe and am glad to be back.


Lisa Lubin is an Emmy-award-winning television writer/producer/photographer/vagabond. After 15 years in broadcast television she took a sabbatical of sorts, traveling and working her way around the world for nearly three years.

Tuesday, January 25, 2011

On the Other Side of the Mirror (Picture of the Day)


The surface of the water is the border between two very different worlds, but it can also be used to create the “total reflection effect” for very special images. This photo was taken in the waters of the Indonesian province of Papua.
On the Other Side of the Mirror, Michele Davino 

Michele Davino is an underwater photographer who has taken part in several international competitions over the last several years with excellent results, including the Festival Mondial de l’Image Sous-Marine (Marseille, France), the Los Angeles Underwater Photographic Society’s annual event (California), the Underwater Images Photo Competition (Cincinnati, Ohio), and Scuba Diver Australasia’s Through the Lens competition (Singapore).

Eileen Collins

Eileen Collins,  (b. November 19, 1956, Elmira, New York, U.S.), American astronaut, the first woman to pilot and, later, to command a U.S. space shuttle.

Collins’s love of airplanes and flying began as a child. At age 19 she saved money earned from part-time jobs and began taking flying lessons. She graduated with a bachelor’s degree in mathematics and economics from Syracuse (New York) University in 1978. She then became one of four women admitted to Air Force Undergraduate Pilot Training at Vance Air Force Base in Oklahoma. The first women astronauts were doing their parachute training at the same base at that time, and Collins realized that the goal of becoming an astronaut was within reach. In 1979 she became the Air Force’s first female flight instructor and for the next 11 years taught both flying and math. As a C-141 Starlifter transport aircraft commander, Collins participated in the U.S.-led invasion of Grenada in 1983, delivering troops and evacuating medical students. She continued her training at the Air Force’s Institute of Technology and was one of the first women to attend Air Force Test Pilot School, from which she graduated in 1990. She eventually achieved the Air Force rank of colonel. She also earned an M.S. in operations research from Stanford University in 1986 and an M.A. in space systems management from Webster University, St. Louis, Missouri, in 1989.

Selected as an astronaut in 1990, Collins became the first woman pilot of a U.S. space shuttle in February 1995, serving on the orbiter Discovery for a rendezvous and docking mission to the Russian space station Mir. She piloted a second shuttle flight in May 1997, successfully docking the Atlantis with Mir to transfer personnel, equipment, and supplies. With hundreds of hours in space to her credit, Collins became the first woman to command a shuttle mission in July 1999, taking Columbia into Earth orbit to deploy the Chandra X-ray Observatory. After Columbia was destroyed on a subsequent flight in February 2003, the entire shuttle fleet was grounded until July 2005, when Collins commanded Discovery on a “return to flight” mission to test new safety modifications and to resupply the International Space Station (ISS). Prior to Discovery’s docking with the ISS, Collins guided the spacecraft through a full 360° pitch (nose-over-tail) maneuver, the first person to do so with an orbiter, which allowed ISS crew members to photograph the spacecraft’s belly for possible damage.

Monday, January 24, 2011

Exercise and Colds

Exercise and Colds: 5 Questions for Human Performance Researcher David Nieman

David Nieman.
David Nieman is a professor of health and exercise science at Appalachian State University and the Director of the Human Performance Laboratory at the North Carolina Research Campus. Nieman has studied health, exercise physiology, and nutrition for more than 20 years and is the author of multiple books, including Exercise Testing and Prescription: A Health-Related Approach, the 7th edition of which was published this year.

Nieman’s most recent study, on exercise and colds, revealed that physically active people are at a much lower risk of becoming infected with the common cold virus and have fewer symptoms when they do become infected than those who are less active. With cold season now upon the Northern Hemisphere, Britannica science editor Kara Rogers posed a few questions to Nieman on his new findings.
 

Britannica: Researchers have long suspected that exercise is an effective way to prevent the common cold, but there has been little scientific evidence supporting this conclusion. In your study, what did you discover that confirmed the beneficial role of physical activity in fighting off colds?

Nieman: In multiple surveys, eight of 10 physically active people claim that they are sick less often than their sedentary peers. I tested this assertion in three randomized, controlled exercise training studies and found that sick days were indeed reduced in women who walked briskly 5 days per week for 35-45 minutes per bout. In the present study of 1,002 men and women ages 18 to 85, those exercising aerobically for 20 minutes or more per bout, 5 or more days per week, for 12 weeks during the winter or fall experienced a 43 percent reduction in the number of days sick with the common cold compared to those largely avoiding aerobic exercise. The strength of this study is the use of a validated instrument for measuring upper respiratory tract infections (URTI), the large and heterogeneous study population, and the sophisticated statistical modeling to adjust for potential confounders.

Britannica: How did you uncover the association between exercise and reduced risk of catching a cold, given the number of variables, such as diet and age, that can also determine one’s risk for colds?

Nieman: Subjects filled in an extensive lifestyle and demographic questionnaire, and then logged URTI symptoms each day for 12 weeks. We used a statistical model that selected and adjusted for the most important URTI-related lifestyle and demographic factors, and then ranked them for impact on the number of days with URTI and severity of symptoms. Of all lifestyle factors, aerobic exercise (5 or more days per week) was most powerful in lowering the number of days with URTI and symptom severity. No pill or supplement comes close to the cold-prevention power of aerobic activity, but time and effort are requisite. Most people claim they lack time to exercise, but the health benefits are numerous, including lowered risk for many chronic diseases, improved psychological health, and increased fitness and energy for the daily activities of life. The net effect is not loss of time but a longer life, fewer days of sickness, and enhanced work productivity.
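
As a rough illustration of the kind of adjustment described here, the sketch below regresses sick days on exercise frequency while controlling for other factors. The data are simulated and the variable names (exercise_days, stress, fruit_servings) are invented for the example; the actual study used a validated URTI instrument, many more covariates, and more sophisticated modeling.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data; the real study followed 1,002 adults for 12 weeks.
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "exercise_days": rng.integers(0, 7, n),    # aerobic sessions per week
    "age": rng.integers(18, 86, n),
    "stress": rng.integers(1, 11, n),          # self-rated mental stress, 1-10
    "fruit_servings": rng.integers(0, 6, n),   # servings per day
})
# Construct an outcome in which exercise and fruit reduce sick days and stress increases them.
df["days_sick"] = np.clip(
    9 - 0.6 * df["exercise_days"] - 0.4 * df["fruit_servings"]
      + 0.3 * df["stress"] + rng.normal(0, 2, n),
    0, None,
)

# Ordinary least squares with the other factors as covariates ("adjusting" for them).
model = smf.ols("days_sick ~ exercise_days + age + stress + fruit_servings", data=df).fit()
print(model.params)  # the exercise_days coefficient is its adjusted association with sick days
```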

Britannica: Is there an ideal amount or type of physical activity that provides optimal protection against the common cold? Are there any activities, such as exercising vigorously in cold weather, that might increase risk and that people should avoid?

Nieman: Our data support that optimal protection occurs with near daily aerobic activity. Every indication is that any type of aerobic activity (e.g., brisk walking, jogging, cycling, swimming, sports play, aerobic dance) will confer protection against the common cold and that air temperature and location (indoor vs. outdoor) are inconsequential. We do know that prolonged, intensive exercise beyond 90 minutes causes a strong increase in stress hormones, physiological stress, immune dysfunction, and an increase in URTI risk. For example, after running a competitive marathon, URTI risk increases 2- to 6-fold, depending on the time of year. So too much of a good thing (exercise) can turn around and become harmful in excessive amounts (similar to the effects of too much sun exposure).

Britannica: Much of your research in the past has centered on the ability of exercise to boost immune function. How does physical activity stimulate the immune system to defend against the common cold?

Nieman: Studies in our Human Performance Lab indicate that during each aerobic exercise bout, the recirculation of important immune cells (e.g., natural killer cells and neutrophils) is enhanced, improving “immunosurveillance” against viruses. This is a transient effect that lasts at most about three hours after the exercise bout. Long-term exercise training studies by my research team have shown that no chronic changes occur in resting immune function. Thus, the key point is that a high frequency of physical activity is needed to repeat the exercise-induced immune cell surges that over time add up to improved virus control and reduced illness.

Britannica: In addition to exercise, were you able to identify other factors associated with either a reduced risk for colds or reduced severity of symptoms?

Nieman: Other important correlates of reduced URTI included older age, male gender, being married, low mental stress, and high fruit intake (3 or more servings per day). Thus, if you are an older, married, mentally content, fruit-eating, and physically fit male, your predicted sick days this cold season are very low compared to others.

Sunday, January 23, 2011

Internet tablets

Archos 28 and 32 Internet tablets set to rock CES 2011

CES 2011 is just around the corner – can you feel the heat? Assuming you say no, well, here we are to help fan those flames into a raging fire, bringing you news of the Archos 28 and Archos 32, both of them Internet tablets powered by the Android operating system and intended to revolutionize the MP3 and portable media player market thanks to the inclusion of Wi-Fi connectivity, all without breaking the bank. Boasting 8GB of internal memory, they ought to have more than enough room for users to play and listen to more than 4,000 songs wherever they happen to be, while also taking advantage of a range of Android-based applications that let them chat, game and partake in social networking – as long as there is an Internet connection, of course.


Both of the tablets mentioned are extremely pocket-friendly in terms of size, with the Archos 28 measuring 3.9” x 2.1” x 0.35” while the Archos 32 is slightly larger at 4.1” x 2.1” x 0.3”, retailing for $99.99 and $149.99, respectively. Sounds pretty darn cheap to us compared to the Apple iPad, but of course, you won’t be able to enjoy the iPad’s amazing App Store support; there is a price to pay for everything, don’t you think?

With the Archos 28 and 32, you can store, play and discover more music with free access to all the great music applications including the once popular Napster (which has since been revived). Users will be able to easily access music from these open-platform MP3 players at a comparable price to MP3 players with closed platforms. Archos will also throw in the Music Cover Carrousel, a built-in music application which enables users to browse through album covers and add widgets to the home screen, enjoying instant access to media controls.

Palro gets an upgrade, and a voice!

Some of you might remember when we covered Palro almost one year ago, back in February.
At the time, we reported that Palro had 20 joints, five microphones, voice recognition, mono speakers, a 3-megapixel camera, Wi-Fi (IEEE 802.11 b/g/n), a gyro-sensor, a three-axis acceleration sensor, eight pressure sensors on the feet, 1GB of internal memory, a USB port, and an Intel Atom 1.6GHz processor.
I was also quoted as describing Palro as “a laptop with legs”. If that’s true, then the laptop has just been given an upgrade, with an Atom Z530 processor, 4GB of flash storage, and an Ubuntu kernel inside.
Palro is now looking more like a robot from our science fiction dreams as he has learned how to speak. (Perhaps I should say “she” as the voice sounds female.) You can watch the video after the jump to see it for yourself.
Some of you might remember that the last time we discussed Palro, we talked about it being a robot that you can program. Just picture what you can do if you can make it talk!
I believe that we reported last time on Palro that it costs about $3,300. According to my new sources, the price has apparently jumped up to $3,600.

Saturday, January 22, 2011

'Flasher Detection'

'Flasher Detection' Algorithm Aims to Clean Up Video Chat

Computer scientists have developed software that spots flashers in the act on video chat sites.
One of the more extraordinary trends in internet use has been the rapid rise of video chat services such as Chatroulette. These services randomly link the webcams of people who visit the site.
But Chatroulette has a problem. The site is dominated by flashers who expose their genitals.
Some 6.3 million visitors used Chatroulette in July 2010, perhaps because of the sexual nature of its content.
But this poses a significant threat to minors. There is no easy way to police the age of people who visit websites and minors can gain access easily. According to Xinyu Xing at the University of Colorado at Boulder and a few pals, a significant number of Chatroulette users appear to be minors.
"Our observations on a typical Saturday night indicate that as many as 20-30% of Chatroulette users are minors," they say in a paper published today on the arXiv.
Xing and co have a solution, however. This team has developed a "flasher detection" algorithm that spots the offenders, allowing them to be kicked out.
It turns out that catching flashers is harder than it might seem at first. One way is to employ a crowdsourcing mechanism in which users report offenders whose video feed can then be evaluated by trained individuals and stopped if necessary.
But that's a time-consuming and expensive task that is open to abuse. And with upwards of 20,000 users at any one time, it's unlikely to work for Chatroulette in the long run.
Another approach is to use existing algorithms designed to detect pornographic content. Exactly how these algorithms work isn't entirely clear, but they appear to look for skin content in images.
Unfortunately, this type of software does not work well with video chat content, say Xing and co. That's because the video images are often poorly lit, making it hard to distinguish skin from yellowy-white walls in the background, for example.
So Xing and co have developed their own algorithm, called SafeVChat, which they've tested on some 20,000 stills taken from Chatroulette videos and supplied by the service's founder, Andrey Ternovskiy. Their paper gives a detailed insight into how it works.
The new approach is interesting because it analyses the images using several different criteria and then fuses the results before deciding whether the image is acceptable or not.
To get over the problem of skin-coloured walls and furniture, they combine skin detection with motion detection that compares sequential frames to see whether the "skin" is moving. And they use face, eye and nose detectors to distinguish facial from non-facial skin. The results are fused and the image is classified as normal or offensive by a classifier trained on the initial dataset.
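The paper has the details; purely as an illustration of the fusion idea (and not the authors' actual SafeVChat code), a toy version in Python might combine the per-detector evidence like this, with made-up weights standing in for parameters that would be learned from the labelled training stills:

```python
# A minimal sketch of the fusion idea described above, NOT the authors'
# actual SafeVChat implementation. The individual detectors are assumed
# to run elsewhere; only their summary scores are fused here.

from dataclasses import dataclass

@dataclass
class DetectorScores:
    skin_ratio: float    # fraction of pixels classified as skin (0..1)
    motion_ratio: float  # fraction of "skin" pixels that moved between frames
    face_ratio: float    # fraction of skin pixels inside a detected face

def fuse(scores: DetectorScores, weights=(0.5, 0.3, -0.6), bias=-0.2) -> bool:
    """Fuse per-detector evidence into a single offensive/normal decision.

    Moving skin raises suspicion; skin that belongs to a detected face
    lowers it. The weights and bias are illustrative placeholders.
    """
    w_skin, w_motion, w_face = weights
    score = (w_skin * scores.skin_ratio
             + w_motion * scores.motion_ratio
             + w_face * scores.face_ratio
             + bias)
    return score > 0.0  # True means "flag as offensive"

# Example: lots of moving skin and no detected face gets flagged.
print(fuse(DetectorScores(skin_ratio=0.6, motion_ratio=0.7, face_ratio=0.05)))
```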
Xing and co say it works well, and significantly better than a commercial pornographic image detection program called PicBlock.

In fact, SafeVChat works so well that Chatroulette began using it on its website earlier this month.
Whether this makes Chatroulette more or less popular, we'll have to wait and see. But with any luck, it will make the site safer for all concerned.

Smart Power Management

There is a causal relationship between energy consumption and economic growth. Higher economic growth implies greater consumption of energy, which in turn implies higher carbon emissions. However, developing countries cannot stop their growth to minimize carbon emissions, the primary reason why the 2009 Copenhagen discussions fell through. There is a need for new solutions that can enable growth without compromising environmental concerns.

We can save energy in three ways: by using devices that consume less energy (for example, LED lights); by avoiding energy wastage; and by enforcing policies that cap the energy usage of an individual or an enterprise. The main issues to resolve for the last two approaches are detecting when and where energy is wasted and enforcing a policy to limit consumption. The current electricity infrastructure has no provision for tracking energy wastage at the individual scale. Moreover, the present infrastructure is not integrated with information technology through which a government could monitor and remotely control such wastage using a device such as a mobile phone, a desktop computer, or any other handheld device connected to the Internet.
The smart power strip (SPS) solution from Infosys is capable of accurately identifying when and where electricity is being wasted. It is even equipped to take action to eliminate such wastage. The SPS monitors a variety of ambient parameters, such as movement, light, sound, and temperature, to identify whether energy consumption can be avoided, and if so, it can turn off the relevant devices. Similarly, based on the temperature and the presence of natural light in an enclosed area, it can decide whether air conditioning needs to be switched on or off.
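As a rough illustration of that kind of rule (the thresholds and function names here are invented, not Infosys' actual decision logic), the decisions might look something like this in Python:

```python
# Toy sketch of ambient-sensing rules an SPS might apply. All thresholds
# are made-up assumptions for illustration only.

def lights_can_be_switched_off(motion: bool, sound_db: float,
                               minutes_since_activity: int) -> bool:
    """The area looks unoccupied: no movement, little sound, idle for a while."""
    occupied = motion or sound_db > 45.0
    return (not occupied) and minutes_since_activity >= 10

def aircon_should_run(temperature_c: float) -> bool:
    """Air conditioning is only justified above a comfort threshold."""
    return temperature_c > 26.0

print(lights_can_be_switched_off(motion=False, sound_db=32.0,
                                 minutes_since_activity=15))  # True
print(aircon_should_run(temperature_c=24.5))                  # False
```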
A large area can be covered with multiple such SPS units, which can communicate amongst one another using wireless sensor network technology and use multi-hop wireless routing to send data to a gateway that consolidates the information and forwards it to a portal on the Internet. The portal, which can be accessed from anywhere using any device, provides the interface to both monitor and control energy consumption. In addition, the devices can be scheduled to switch on or switch off at specific times.
Although the researchers have developed the technology in a power strip, they say the same technology can be incorporated into legacy switchboards as well. Thus, even legacy appliances can be connected to the consumption points, transforming them into smart appliances with monitoring and control capability, just like their smart counterparts. The smart power strip can also be used as a normal power strip into which various appliances are plugged.
An SPS comprises an electronic circuit to measure the voltage and current consumed at each plug point. Algorithms executed in a microcontroller calculate the power factor of the devices plugged into the sockets, enabling the calculation of electricity consumption at every plug point. Moreover, electricity consumption can be controlled by switching the devices connected to the plug points on or off using relays. The system has USB, Ethernet, and Zigbee interfaces to communicate with the outside world.
The SPS can transform the existing electricity infrastructure into a software-controlled smart infrastructure, and various kinds of software applications can be built on it. For example, one can communicate with the appliances connected to the smart power strip using instant messaging (IM), just as one would communicate with an individual. One of the biggest advantages of the SPS is its application in the smart grid for supporting demand response mechanisms. Currently, in a city environment, when the demand for electricity exceeds the supply, the energy/utility service provider forces consumers in a neighborhood to shut down all of their electrical appliances, a phenomenon commonly known as load shedding. The smart power strip enables selective load shedding, which means that only selected appliances are turned off at peak load times, as chosen by consumers participating in the demand response (DR) program. This brings the overall demand down below the available supply and avoids a complete blackout of a particular area.
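To make the selective load shedding idea concrete, here is a hedged sketch in Python: given the shortfall between demand and supply, only appliances that DR participants have marked as non-essential are switched off. The appliance names and wattages are illustrative, not taken from the article.

```python
# Illustrative "selective load shedding": shed only consumer-approved
# (non-essential) loads, largest first, until the shortfall is covered.

def select_loads_to_shed(appliances, shortfall_watts):
    """appliances: list of (name, watts, sheddable) tuples.
    Returns (names to switch off, total watts shed)."""
    sheddable = sorted((a for a in appliances if a[2]),
                       key=lambda a: a[1], reverse=True)
    to_shed, shed_watts = [], 0
    for name, watts, _ in sheddable:
        if shed_watts >= shortfall_watts:
            break
        to_shed.append(name)
        shed_watts += watts
    return to_shed, shed_watts

appliances = [("geyser", 2000, True), ("fridge", 300, False),
              ("air conditioner", 1500, True), ("lights", 200, True)]
print(select_loads_to_shed(appliances, shortfall_watts=1800))
# -> (['geyser'], 2000): the fridge stays on, and no blackout is needed.
```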

SMART SOLUTION: Infosys has developed an iPhone application through which users can access and control the smart power strip from anywhere, anytime.
Smart power strips can also be used in an enterprise’s smart space management program, whereby the enterprise can monitor the occupancy of cubicles and turn off the computer monitors, VoIP phones, and lights connected to their sockets when cubicles are not occupied. An enterprise-level energy consumption policy can be imposed to ensure that every employee has a particular daily, weekly, and monthly energy budget. This can change the behavior of the enterprise’s employees and avoid wastage of electricity.

INSIDE OUT
There are three components of an SPS: hardware, middleware, and the application. Each SPS has a potential transformer (PT) to step down the voltage, a current transformer (CT) to step down the current, and a microcontroller to compute the power factor. Output signals from the PT and CT are processed through a signal-conditioning circuit and passed to the microcontroller, which samples these signals at 1 kilohertz and computes the root-mean-square voltage and current as well as the real and apparent power (and hence the power factor). In addition, there are temperature, movement (PIR), sound (MIC), and light sensors. The data received from the smart power strips is processed and analyzed at a central server which stores the history of each smart power strip.
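The measurement step can be sketched numerically. The snippet below synthesises voltage and current waveforms, samples them at 1 kHz as described above, and computes the RMS values, real and apparent power, and power factor. The supply voltage, load current, and phase angle are made-up examples, not figures from the SPS hardware.

```python
# Numerical sketch of power measurement from sampled waveforms.
import math

FS = 1000                 # sampling rate in Hz (the article mentions 1 kHz)
F_MAINS = 50              # assumed mains frequency in Hz
N = FS // F_MAINS * 10    # ten full cycles of samples

# Synthesise a 230 V RMS supply feeding a load whose current lags by 30 degrees.
v = [230 * math.sqrt(2) * math.sin(2 * math.pi * F_MAINS * n / FS)
     for n in range(N)]
i = [5 * math.sqrt(2) * math.sin(2 * math.pi * F_MAINS * n / FS - math.radians(30))
     for n in range(N)]

v_rms = math.sqrt(sum(x * x for x in v) / N)
i_rms = math.sqrt(sum(x * x for x in i) / N)
real_power = sum(a * b for a, b in zip(v, i)) / N   # watts (mean of v*i)
apparent_power = v_rms * i_rms                      # volt-amperes
power_factor = real_power / apparent_power          # ~cos(30°) ≈ 0.866

print(round(v_rms), round(i_rms, 2), round(real_power), round(power_factor, 3))
```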
The middleware component of the SPS connects the device to the outside world. The middleware stack (named MoJo) converts a real wireless sensor-based network into a virtual network of Java objects. The MoJo platform exposes a Java-based application programming interface to developers so that they can write applications without being aware of the functioning of the underlying wireless sensors. MoJo also ensures that power strips are networked together and that enterprise-level energy consumption policies can be applied. Many APIs are exposed for various functions on a power strip; for example, there are APIs for getting a list of power strips, renaming a power strip or an individual socket (or plug point), getting the current consumed by a power strip or an individual socket, getting the various sensor values on the power strip, switching an individual socket on and off, and more. Any software application can be built using these APIs.
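MoJo itself exposes a Java API whose exact method names are not given here, so the Python sketch below only mirrors the kinds of operations listed (enumerating strips, reading consumption, switching sockets) and shows how an enterprise-level policy could sit on top of them. All class and function names are invented for illustration.

```python
# Hypothetical objects standing in for the strip/socket abstraction the
# middleware provides; not the real MoJo API.

class Socket:
    def __init__(self, name, current_amps, is_on=True):
        self.name, self.current_amps, self.is_on = name, current_amps, is_on

    def switch(self, on: bool):
        self.is_on = on

class PowerStrip:
    def __init__(self, name, sockets):
        self.name, self.sockets = name, sockets

    def total_current(self):
        return sum(s.current_amps for s in self.sockets if s.is_on)

def apply_policy(strips, max_amps_per_strip):
    """Example enterprise policy: if a strip draws too much current,
    switch off its hungriest socket (an illustrative rule only)."""
    for strip in strips:
        if strip.total_current() > max_amps_per_strip:
            worst = max((s for s in strip.sockets if s.is_on),
                        key=lambda s: s.current_amps)
            worst.switch(False)

strips = [PowerStrip("cubicle-7", [Socket("monitor", 0.4), Socket("heater", 6.0)])]
apply_policy(strips, max_amps_per_strip=5.0)
print([(s.name, s.is_on) for s in strips[0].sockets])
# -> [('monitor', True), ('heater', False)]
```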
Researchers at Infosys have developed a Web-based application that can be accessed through the Internet, as well as an iPhone application, to access the SPS. Like an email or a Facebook account, the user has a power management account on the Internet where he or she can monitor and control the power being consumed in his or her house or office. If someone is present at home, the application allows the SPS to use its discretion about whether or not to turn off appliances. The application also raises an alert when a critical appliance, such as a geyser, has been on for some time; one can then go to the control panel and turn off the geyser remotely.

CHALLENGES
Some of the technology challenges related to the SPS are the measurement of power, that is, the computation of the power factor by sampling current and voltage, and the measurement of voltage over a wide range of inputs (110 to 220 volts). Controlling the plug points is another challenge: receiving and interpreting messages and activating the relays; protecting appliances connected to the plug points against surges in the power supply; and implementing a software fuse to cut off power even before the actual threshold is hit are key issues to deal with. Other challenges include dynamic creation of the wireless mesh network; multi-hop energy-aware routing; converting physical wireless motes to Java objects; collating, calibrating, filtering, and aggregating data from multiple sensors; automatic detection of smart power strips as they are plugged into the wall socket and of devices as they are plugged into the power strip; creating a visually appealing, intuitive, and easy-to-use Web-based interface; and developing a simple-to-use application for mobile handsets to monitor and control the devices connected to the smart power strips.

ACCESSIBILITY: iPhone app for the smart power strip.
There are several economic challenges for the SPS, depending on its application domain and market segment. If smart power strips are targeted at residential users, they must be inexpensive enough for consumers to recover their investment in less than a year; for example, a five-socket power strip should be priced between $25 and $40 for mass-market adoption. However, providing all the monitoring and control capabilities at that price point is tough. For the commercial segment, the economic benefit is more than the savings on the electricity bill: a large benefit comes from the fact that smart power strips will enable enterprises to meet their target carbon footprint, thereby helping them brand their corporation as “green” and “sustainable”. Thus, corporations will be willing to pay a higher premium than the residential market.
When positioned for the energy or utilities market, the economic challenge will be to incorporate smart demand response, with the energy or utility service providers giving better-than-existing rates to those customers who participate in the DR program. We already see intermediaries between the energy/utility service provider and the residential/commercial customers who provide this service by requesting consumers to turn off their non-essential appliances at peak load times and compensating them for their support of the DR program.

The SPS solution has been piloted on the Infosys campus. When a cubicle is not occupied for more than 10 minutes at a stretch, the light, phone, and monitor connected to the smart power strip are switched off. This is done by running business logic in each power strip. The project helped Infosys achieve significant savings on its energy bill (approximately 10 percent) and reduce its carbon footprint.



Friday, January 21, 2011

How do touch screens work?

Touch screen monitors — where you can use your finger on the computer screen to navigate through the contents — have become more and more commonplace over the past decade, particularly at public information kiosks. A basic touch screen has three main components: a touch sensor, a controller, and a software driver. The touch screen is an input device, so it needs to be combined with a display and a PC to make a complete touch input system.

The touch sensor has a textured coating across the glass face. This coating is sensitive to pressure and registers the location of the user's finger when it touches the screen. The controller is a small PC card that connects the touch sensor to the PC. It takes information from the touch sensor and translates it into information that the PC can understand. The software driver is a piece of software for the PC system that allows the touchscreen and computer to work together. It tells the computer's operating system how to interpret the touch event information that is sent from the controller.

There are three basic systems that are used to recognise a person's touch — Resistive, Capacitive and Surface acoustic wave.

The resistive system consists of a normal glass panel that is covered with a conductive and a resistive metallic layer. These layers are held apart by spacers, and a scratch-resistant layer is placed on top of the whole setup. An electrical current runs through the two layers while the monitor is operational. When a user touches the screen, the two layers make contact at that spot. The change in the electrical field is noted and the coordinates of the point of contact are calculated. Once the coordinates are known, a special driver translates the touch into something that the operating system can understand, much as a computer mouse driver translates a mouse's movements into a click or a drag.
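As an illustration of that last step (not any particular controller's firmware), converting the voltage measured at the contact point into a screen coordinate is essentially a linear mapping from an ADC reading to a pixel position. The ADC range and screen resolution below are assumptions.

```python
# Toy conversion from a raw analogue reading to a pixel coordinate,
# one axis at a time, as a resistive controller might do.

def adc_to_coordinate(adc_reading, adc_max=1023, screen_pixels=800):
    """Map a raw ADC value (0..adc_max) to a pixel position along one axis."""
    return round(adc_reading / adc_max * (screen_pixels - 1))

# Suppose the controller reads 512 on the X axis and 256 on the Y axis
# of an 800x480 panel:
x = adc_to_coordinate(512, screen_pixels=800)   # ~400
y = adc_to_coordinate(256, screen_pixels=480)   # ~120
print(x, y)
```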

In the capacitive system, a layer that stores electrical charge is placed on the glass panel of the monitor. When a user touches the monitor with his or her finger, some of the charge is transferred to the user, so the charge on the capacitive layer decreases. This decrease is measured in circuits located at each corner of the monitor. The computer calculates, from the relative differences in charge at each corner, exactly where the touch event took place and then relays that information to the touch screen driver software. One advantage of the capacitive system is that it transmits almost 90 per cent of the light from the monitor, whereas the resistive system only transmits about 75 per cent. This gives the capacitive system a much clearer picture than the resistive system.
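A simplified model of that corner calculation, with made-up currents and screen dimensions, might look like this; real controllers add calibration, but the proportional idea is the same:

```python
# Simplified surface-capacitive position estimate: the touch draws charge
# from all four corners, and the relative shares indicate where it is.

def position_from_corner_currents(i_tl, i_tr, i_bl, i_br,
                                  width=800, height=480):
    total = i_tl + i_tr + i_bl + i_br
    # More current through the right-hand corners means the touch is nearer
    # the right edge; likewise the bottom corners for the vertical axis.
    x = (i_tr + i_br) / total * width
    y = (i_bl + i_br) / total * height
    return round(x), round(y)

# A touch near the bottom-right corner draws most of its current there:
print(position_from_corner_currents(i_tl=1.0, i_tr=3.0, i_bl=2.0, i_br=6.0))
# -> roughly (600, 320) on an 800x480 panel
```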

The surface acoustic wave system uses two transducers (one receiving and one sending) placed along the x and y axes of the monitor's glass plate. Also placed on the glass are reflectors — they reflect an electrical signal sent from one transducer to the other. The receiving transducer is able to tell if the wave has been disturbed by a touch event at any instant, and can locate it accordingly. The wave setup has no metallic layers on the screen, allowing for 100-percent light throughput and perfect image clarity. This makes the surface acoustic wave system best for displaying detailed graphics (both other systems have significant degradation in clarity).
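As a rough sketch of the timing idea behind the acoustic approach (the wave speed and delays below are illustrative, not taken from any specific panel), the extra delay at which the attenuated echo arrives maps to a distance along the axis being scanned:

```python
# Toy model: a touch absorbs part of the surface wave, and the receiving
# transducer sees a "dip" at a delay proportional to the touch position.

SAW_SPEED_M_PER_S = 3000.0  # assumed order of magnitude for waves on glass

def touch_position_from_delay(dip_delay_s, base_delay_s):
    """Extra delay relative to the shortest path maps linearly to distance
    along the scanned axis (in metres)."""
    return (dip_delay_s - base_delay_s) * SAW_SPEED_M_PER_S

# A dip arriving 40 microseconds after the earliest possible echo implies a
# touch roughly 0.12 m along that axis:
print(touch_position_from_delay(dip_delay_s=55e-6, base_delay_s=15e-6))  # 0.12
```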

Another area in which the systems differ is which stimuli will register as a touch event. A resistive system registers a touch as long as the two layers make contact, which means that it doesn't matter if you touch it with your finger or a rubber ball. A capacitive system, on the other hand, must have a conductive input, usually your finger, in order to register a touch. The surface acoustic wave system works much like the resistive system, allowing a touch with almost any object — except hard and small objects like a pen tip.