AUTOMOBILE AUTOMATION (AVCS) - The Real Automobile
The potential benefits of automating the guidance of automobiles are extensive, especially with regard to better utilization of highway space and safety. Proposals for automobile automation have been made for at least fifty years, but a practical system has not been possible because of technology limitations. Now, new advances in technology have brought a practical system within reach. This article discusses the potential benefits of automation, the associated technology requirements, and cost/benefit trade-offs.
Advanced Vehicle Control Systems (AVCS)
To many people the subject of self-guided "automatic" automobiles has a "science fiction" flavor, typical of projects that are either far beyond the state of the art or impractical from a cost/benefit standpoint. Actually, recent advances in computers, sensors, and related technology have made such a system feasible in the relatively near term, and the enormous potential benefits can justify major development and deployment costs.
AVCS Space Utilization Advantage
Human drivers are extremely inefficient in their use of highway space. A typical automobile, when parked in a garage, occupies about 100 square feet of space. Adding "overhead" in the form of room to open the doors and walk around the car brings the total to perhaps 175 square feet. Yet this same automobile, when operated on the highway at 70 miles per hour, requires over 5,000 square feet of space. Each commuter, from the time he gets on the highway until he gets off, requires an average highway space exceeding one-eighth of an acre that "dynamically" moves with him as he travels at 70 mph. This is a large amount of space compared to the space most people occupy to live and work.
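To see where the one-eighth-acre figure comes from, here is a quick back-of-envelope calculation in Python. The 4-second average headway, 15-foot car length, and 12-foot lane width are my assumptions for illustration, not figures from the article.

```python
# Rough highway-space-per-car estimate (all inputs are assumptions).
speed_fps = 70 * 5280 / 3600            # 70 mph ~= 102.7 feet per second
headway_s = 4.0                         # assumed average time gap between cars
car_length_ft = 15                      # assumed car length
lane_width_ft = 12                      # assumed lane width

lane_length_ft = speed_fps * headway_s + car_length_ft   # ~426 ft per car
area_sqft = lane_length_ft * lane_width_ft                # ~5,100 sq ft
print(f"{area_sqft:.0f} sq ft = {area_sqft / 43560:.3f} acre")
# -> about 5,100 sq ft, i.e., roughly one-eighth of an acre
```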
AVCS Feasibility Considerations
A vehicle guidance system capable of delivering on the promises outlined above would necessarily be highly sophisticated, involving substantial electronics, computers, and software. But vehicle guidance is a safety-critical function: we certainly are not going to deploy a new system that cannot be proved safer than the existing one. At the same time, cost will be a major factor. Is our technology up to the task?
To explore this issue, let's examine some other transportation systems.
The elevator was first automated in approximately 1940. Because elevators are mechanically guided in all but one degree of freedom, and because of other simplifying circumstances, automation could be accomplished without electronics, much less computers.
Guidance of the Wright brothers' Flyer (1903) was by means of cables connecting the pilot's hands and feet to the control surfaces. Modern aircraft such as the Boeing 747 are guided in the same manner, using cables and pulleys with the addition of mechanical/hydraulic force amplification to allow the pilot to move the much larger control surfaces. However, in the 1980s, aircraft such as the Airbus A320 were introduced in which guidance is provided by a digital computer system. In effect, the computer and its software fly the plane, and the pilots provide advice and direction to the computer via their controls. The computer systems significantly improve safety by detecting and overriding some types of pilot error. These computer systems (which have substantial redundancy) are considered sufficiently reliable to serve as the only means of guiding an aircraft carrying several hundred people, and they had enough advantages to justify their development and safety certification costs. Fighter aircraft and the Space Shuttle have similar control systems.
Keep in mind that the potential benefits of AVCS are extremely large. The savings in highway construction cost, real estate required for highways, pollution, travel time, and reduced injury and death will justify rather large development and deployment costs. Would you rather have a manual Mercedes or an automated Chevrolet that would get you to work in half the time with half the hassle?
AVCS Architecture Considerations
One possible approach to vehicle guidance automation would be to simply replace the driver with a "robot" system that would perform some of the same functions, only better, using existing highways. However, a "hybrid" system in which some functions are performed by automation equipment in the highway and other functions are performed by equipment in the vehicle has major advantages and is virtually certain to be chosen for any deployed system. It is assumed that the highway and vehicle systems would communicate and cooperate in the execution of the guidance task. Here are some scenarios illustrating potential features of a hybrid system:
Highway automation systems could have "machine readable" signs, marks, or electronic signals to aid in guidance and supplement any imagery analysis system.
Friday, March 19, 2010
How can deleted computer files be retrieved at a later date?
Clay Shields, professor of computer science at Georgetown University, offers this answer:
“Deleted” files can be restored because they aren’t really gone—at least not right away. This is because it is faster and more efficient for computers to overwrite data only when necessary, when no other space is available to write new data.
A computer stores information in chunks called sectors. A file may be written across several sectors and might be scattered around the disk. The operating system keeps an index of which sectors belong to which files and a directory that maps the file names to the index entries.
When a user deletes a file, its directory entry is either removed or labeled as deleted. A deleted file can thus be salvaged if the index information and sectors have not yet been reused.
Such recovery is easy in operating systems that simply mark directory entries as deleted. A program scans the directory for deleted entries and presents a menu of files to recover. In other types of systems, recovery is more complicated. The directory entries may be lost, making it harder to find the file. The recovery program must look through all the index information and piece together files from various sectors. Because sectors may have been reused, only parts of the file may be accessible.
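Here is a minimal sketch of this behavior, assuming a FAT-style design in which deletion merely flags the directory entry. The structures and names are hypothetical simplifications, not any real file system's layout.

```python
# Toy file system: deleting only flips a flag; sector data is untouched.
sectors = {}      # sector number -> data chunk
directory = {}    # file name -> (list of sector numbers, deleted flag)

def write_file(name, chunks, start):
    nums = list(range(start, start + len(chunks)))
    for n, chunk in zip(nums, chunks):
        sectors[n] = chunk
    directory[name] = (nums, False)

def delete_file(name):
    nums, _ = directory[name]
    directory[name] = (nums, True)    # mark deleted; data stays on "disk"

def recover_deleted():
    # Salvage any flagged file whose sectors have not been reused.
    return {name: "".join(sectors[n] for n in nums)
            for name, (nums, deleted) in directory.items()
            if deleted and all(n in sectors for n in nums)}

write_file("memo.txt", ["hel", "lo"], start=0)
delete_file("memo.txt")
print(recover_deleted())   # {'memo.txt': 'hello'} -- nothing was erased
```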
How do computer hackers “get inside” a computer?
Julie J.C.H. Ryan, assistant professor at George Washington University and co-author of Defending Your Digital Assets against Hackers, Crackers, Spies, and Thieves, explains:
Essentially, hackers get inside a computer system by taking advantage of software or hardware weaknesses that exist in every system. Before explaining how they do this, a few definitions are in order. The term “hacker” is fairly controversial: some use this word to describe those whose intrusions into computer systems push the boundaries of knowledge without causing intentional harm, whereas “crackers” want to wreak havoc. I prefer “unauthorized user” (UU) for anyone who engages in unsanctioned computer access. “Getting inside” can mean one of three things: accessing the information stored on a computer, surreptitiously using a machine’s processing capabilities (to send spam, for instance) or capturing information being sent between systems.
So how does a UU get inside a computer? The easiest weakness to exploit is a poorly conceived password. Password-cracking programs can identify dictionary words, names and even common phrases within a matter of minutes. Many of these programs perform a “dictionary attack”: they take the encryption code used by the password system and encrypt every word in the dictionary. Then the UU plugs in the encrypted words until the password match is found. If a system has a complex password, the UU could try a “technical exploit,” which means using technical knowledge to break into a computer system (as opposed to nontechnical options such as stealing documentation about a system). This is more challenging, because the UU must first learn what kind of system the target is and what the system can do. A proficient UU can do this remotely by using the hypertext transfer protocol (HTTP), which underlies World Wide Web access. Web pages usually record the browser being used. The UU could write a program that takes advantage of this procedure, making the Web page ask for even more information. With this knowledge in hand, the UU then writes a program that circumvents the protections in place in the system.
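A minimal sketch of the dictionary attack described above, assuming the system stores unsalted SHA-256 hashes; the “leaked” hash and word list are made up for illustration. (Real systems should use salted, deliberately slow hashes, which makes this attack far harder.)

```python
import hashlib

def dictionary_attack(target_hash, wordlist):
    # Hash every candidate word and compare against the stored value.
    for word in wordlist:
        if hashlib.sha256(word.encode()).hexdigest() == target_hash:
            return word
    return None

stored = hashlib.sha256(b"sunshine").hexdigest()   # the "leaked" hash
print(dictionary_attack(stored, ["password", "letmein", "sunshine"]))
# -> 'sunshine', recovered in milliseconds
```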
Although you cannot eliminate all possible weaknesses, you can take steps to protect against unauthorized access. Make sure you have the latest patches for your operating system and applications. Create a complex password with letters, numbers and symbolic characters. Consider installing a firewall program, which blocks unwanted Internet traffic. Make sure your antivirus software is up-to-date and check frequently for new virus definitions. Finally, back up your data, so you can recover important material if anything does happen.
Why is the fuel economy of a car better in the summer?
Harold Schock, professor of mechanical engineering and director of the Automotive Research Experiment Station at Michigan State University, explains:
Temperature and precipitation affect the inner workings of a vehicle and the actions of its driver, both of which have an impact on the mileage. In cold, snowy weather, the fuel economy during trips of less than 10 minutes in urban stop-and-go traffic can easily be 50 percent lower than during operation of the same vehicle in light traffic with warm weather and dry roads.
Auto components such as electric motors, engines, transmissions and the axles that drive the tires consume more energy at low temperatures, especially during start-up. Oil and other fluids become more viscous as temperatures drop, which means that more work—and thus fuel—is required to overcome friction in the drivetrain components. In addition, the initial rolling resistance of a tire is about 20 percent greater at zero degrees Fahrenheit than it is at 80 degrees F. This rolling resistance decreases as the vehicle starts to move, and in trips of a few miles the temperature rise—and its effect on mileage—is modest.
The aerodynamic drag acting on a vehicle increases in colder weather as well. Air density is about 17 percent higher on a cold, zero-degree day than it is on a hot, 80-degree day. This difference matters little in city driving, but on an open highway the colder temperature reduces mileage by about 7 percent, even taking into account the improvement in fuel efficiency that cars typically experience during highway driving.
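A quick check of that density figure using the ideal gas law (at fixed pressure, air density is inversely proportional to absolute temperature); the drag formula in the comment is the standard one, not something from the article:

```python
# Verify the ~17% air-density difference between 0 F and 80 F.
def f_to_kelvin(f):
    return (f - 32) * 5 / 9 + 273.15

t_cold = f_to_kelvin(0)     # ~255.4 K
t_hot = f_to_kelvin(80)     # ~299.8 K

ratio = t_hot / t_cold      # rho_cold / rho_hot, since rho ~ 1/T
print(f"cold air is {100 * (ratio - 1):.0f}% denser")   # -> 17%

# Drag force F = 0.5 * rho * Cd * A * v**2 is linear in density, so
# highway-speed drag rises by the same ~17% in zero-degree air.
```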
Personal driving habits can also add heavily to the winter efficiency slide. In winter, we use heater motors, defrosters and windshield wipers to keep our fingers warm and our sight line clear. We often bring the automobile interior to a comfortable temperature before driving and then keep our engines idling to maintain that temperature when we have to wait in the car.
In any season, you can improve your mileage with a few simple steps: Keep tire pressure at the recommended level (lower pressure reduces mileage). Avoid storing excessive weight in the car and driving in heavy stop-and-go traffic. Finally, courteous, careful motorists have lower gas-pump bills than those who employ frequent acceleration and braking.
How do Internet search engines work?
Javed Mostafa, Victor H. Yngve Associate Professor of Information Research Science and director of the Laboratory of Applied Informatics, Indiana University, offers this answer:
Publicly available Web services—such as Google, InfoSeek, Northernlight and AltaVista—employ various techniques to speed up and refine their searches. The three most common methods are known as preprocessing the data, “smart” representation and prioritizing the results.
One way to save search time is to match the Web user's query against an index file of preprocessed data stored in one location, instead of sorting through millions of Web sites. To update the preprocessed data, software called a crawler is sent periodically by the database to collect Web pages. A different program parses the retrieved pages to extract search words. These words are stored, along with the links to the corresponding pages, in the index file. New user queries are then matched against this index file.
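A minimal sketch of such an index file: the crawler's pages are parsed into words, and each word maps to the set of pages containing it, so answering a query becomes a dictionary lookup rather than a crawl. The URLs and page text here are made up for illustration.

```python
from collections import defaultdict

pages = {  # hypothetical crawled pages
    "http://a.example": "window blind for the kitchen",
    "http://b.example": "blind ambition in politics",
}

index = defaultdict(set)
for url, text in pages.items():
    for word in text.lower().split():
        index[word].add(url)          # word -> pages containing it

print(sorted(index["blind"]))         # both pages match instantly
```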
Smart representation refers to selecting an index structure that minimizes search time. Data are far more efficiently organized in a “tree” than in a sequential list. In an index tree, the search starts at the “top,” or root node. For search terms that start with letters that are earlier in the alphabet than the node word, the search proceeds down a “left” branch; for later letters, “right.” At each subsequent node there are further branches to try, until the search term is either found or established as not being on the tree.
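Here is a toy version of that tree in Python, showing the left/right alphabetical descent; the index terms are arbitrary examples.

```python
class Node:
    def __init__(self, word):
        self.word, self.left, self.right = word, None, None

def insert(root, word):
    if root is None:
        return Node(word)
    if word < root.word:
        root.left = insert(root.left, word)
    elif word > root.word:
        root.right = insert(root.right, word)
    return root

def contains(root, word):
    while root is not None:           # walk down until found or off the tree
        if word == root.word:
            return True
        root = root.left if word < root.word else root.right
    return False

root = None
for term in ["mango", "apple", "zebra", "kiwi"]:
    root = insert(root, term)

print(contains(root, "kiwi"))   # True: left past "mango", right past "apple"
```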
The URLs, or links, produced as a result of such searches are usually numerous. But because of ambiguities of language (consider “window blind” versus “blind ambition”), the resulting links would generally not be equally relevant. To glean the most pertinent records, the search algorithm applies ranking strategies. A common method, known as term frequency-inverse document frequency (TF-IDF), determines relative weights for words to signify their importance in individual documents; the weights are based on the distribution of the words and the frequency with which they occur. Words that occur very often (such as “or,” “to” and “with”) and that appear in many documents have substantially less weight than do words that appear in relatively few documents and are semantically more relevant.
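A bare-bones TF-IDF computation, assuming the common logarithmic form of inverse document frequency; the three “documents” are made up for illustration.

```python
import math

docs = [
    "the cat sat on the mat".split(),
    "the dog chased the cat".split(),
    "blind ambition drives ambition".split(),
]

def tf_idf(word, doc):
    tf = doc.count(word) / len(doc)            # term frequency in this doc
    df = sum(1 for d in docs if word in d)     # documents containing the word
    return tf * math.log(len(docs) / df)       # rare words score higher

print(f"{tf_idf('the', docs[0]):.3f}")       # ~0.135: common word, low weight
print(f"{tf_idf('ambition', docs[2]):.3f}")  # ~0.549: rare, repeated, high weight
```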
Link analysis is another weighting strategy. This technique considers the nature of each page—namely, if it is an “authority” (a number of other pages point to it) or a “hub” (it points to a number of other pages). The highly successful Google search engine uses this method to polish searches.
What exactly is déjà vu?
James M. Lampinen, assistant professor of psychology at the University of Arkansas, supplies this answer:
Most people experience déjà vu—the feeling that an entire event has happened before, despite the knowledge that it is unique. We don’t yet have a definitive answer about what produces déjà vu, but several theories have been advanced.
One early theory, proposed by Sigmund Freud, is that déjà vu takes place when a person is spontaneously reminded of an unconscious fantasy. In 1990 Herman Sno, a psychiatrist at Hospital de Heel in Zaandam, the Netherlands, suggested that memories are stored in a format similar to holograms. Unlike a photograph, each section of a hologram contains all the information needed to reproduce the entire picture. But the smaller the fragment, the fuzzier the resultant image. According to Sno, déjà vu occurs when some small detail in one’s current situation closely matches a memory fragment, conjuring up a blurry image of that former experience.
Déjà vu can also be explained in terms of what psychologists call global matching models. A situation may seem familiar either because it is similar to a single event stored in memory or because it is moderately similar to a large number of stored events. For instance, imagine you are shown pictures of various people in my family. Afterward, you happen to bump into me and think, “Hey, that guy looks familiar.” Although nobody in my family looks just like me, they all look somewhat like me, and according to global matching models the similarity tends to summate.
Progress toward understanding déjà vu has also been made in cognitive psychology and the neurosciences. Researchers have distinguished between two types of memories. Some are based on conscious recollection; for example, most of us can consciously recall our first kiss. Other memories, such as those stimulated when we meet someone we seem to recognize but can’t quite place, are based on familiarity. Researchers believe that conscious recollection is mediated by the prefrontal cortex and the hippocampus at the front of the brain, whereas the part housed behind it, which includes the parahippocampal gyrus and its cortical connections, mediates feelings of familiarity. Josef Spatt of the NKH Rosenhügel in Vienna, Austria, has argued that déjà vu experiences occur when the parahippocampal gyrus and associated areas become temporarily activated in the presence of normal functioning in the prefrontal cortex and hippocampus, producing a strong feeling of familiarity but without the experience of conscious recollection.
As you can tell, this is an area still ripe for research.
Monday, March 15, 2010
Engineers’ Fun
Don't like the TV channel selection in the hostel common room? Hate the volume your room partner sets the stereo at? Want to just annoy someone? This circuit does all that and more by jamming most IR remote signals. The circuit releases a flood of pulsing IR light that confuses the receiver by corrupting the data stream.
http://rapidshare.com/files/363711765/Engineers.pdf