
At America's Biggest Drone Show, the Focus Shifts Toward Domestic Skies

How a defense-based industry is preparing to fill the home skies with robots

If you want to know what the future looks like, sit down and have a talk with Roy Minson. He's the senior vice president and general manager of unmanned aircraft systems at Aerovironment, the manufacturer of nearly 85 percent of the Department of Defense's unmanned aircraft fleet--not the Reapers and Predators that so often make headlines, but small aerial systems that make up the vast majority of the DoD's 7,000-strong unmanned aircraft fleet. That is to say, business with the defense sector is good at Aerovironment. But today Minson is talking almost exclusively about non-military applications for the company's hardware--him, and just about everybody else at the nation's largest robotic systems show.

That's partially because I'm peppering him with questions about civilian drone applications, but our conversation was bound to wander in this direction. The DoD certainly isn't going anywhere, but defense spending cuts hang over this place like the sword of Damocles. The term for next year's mandated defense spending cuts, "sequestration," can be overheard around the exhibit hall here at the Association for Unmanned Vehicle Systems International's annual North American trade show like some kind of new and derogatory slang.

But even as Congress prepares to twist the Pentagon's money spigot down to a trickle, it tossed the unmanned systems industry a bone earlier this year by mandating that the Federal Aviation Administration integrate unmanned aircraft--first small unmanned aircraft systems (UAS), but eventually larger ones as well--into the domestic airspace by 2015. First responders can already obtain clearance to operate small drones. And as a result, the tenor of the whole conversation here has shifted.

Defense is still the overarching theme at AUVSI, but people are talking a whole lot as well about local law enforcement, public safety, site security, forestry, pipeline inspection, mining operations, infrastructure safety, border security, oil and gas exploration, farming, and countless other potential applications for unmanned systems here at home. The civilian market for domestic drones is opening up, and a high-tech industry in need of a customer is stepping into what was previously a void--at least in the United States.

"We're looking forward to addressing the civilian and public safety sectors," Minson says, echoing what I've heard from just about every executive, engineer, or PR rep I've sat down with for the past few days. Minson describes the range of ways Aerovironment's small UAS have served in non-military roles outside the U.S.--tracking animals for wildlife agencies, guiding ice breakers in the Arctic, monitoring for airborne radiation in Japan in the wake of the Fukushima nuclear disaster--and how feedback from those roles has informed the company's thinking on non-military applications.

It's a common thread here. A range of companies I spoke with--most of them defense companies in the popular consciousness--have been exploring applications beyond the military in countries that, by industry standards, aren't as woefully behind the curve on unmanned systems regulations as the United States. Names like Rockwell Collins, Lockheed Martin, Boeing, and Honeywell, as well as a hundred smaller firms you likely haven't heard of (yet).

Entire states (or their top aerospace universities) have shown up--Ohio, North Dakota--to throw their support behind the industry and demonstrate their desire (often in the form of educational and economic incentives) to grow the unmanned systems sectors within their borders. I sat down with the state of Oklahoma's Lt. Governor Todd Lamb and its Secretary of Science and Technology Stephen McKeever for a quick conversation about the role of government in this new economic and regulatory environment. The takeaway: This technology will augment everything that Oklahoma does best, from its strong oil and gas production industry to agriculture to its many pre-existing aerospace interests, which can then export the technology around the country and world. An investment in UAS technologies is an investment in every other industry in the state itself, as well as in an emerging economic sector stretching far beyond its borders in which it hopes to become a leader.

So what will America's drone-enhanced future look like? The personalization of the UAS--the niche unmanned system--is coming, Minson says. But it's coming later. We'll first see unmanned systems doing what comes most naturally to them: public safety and law enforcement missions that are easy lateral translations for technologies built for military operations. The mission profile of a first responder drone--enhancing situational awareness, locating targets in a chaotic environment, streaming data from places too dangerous for human presence--is almost exactly that of most military UAS (Hellfire missiles not included).

Look no further than the quadrotor built by Procerus, a company Lockheed Martin acquired in January--at about the same time Congress ordered the FAA to get busy opening American skies to UAS. Lockheed demonstrated the rapidly-deployable, surveillance-oriented quadrotor all week at AUVSI, but it did so with an emphasis not on Marines in Afghanistan but on police officers and emergency crews here at home. Similarly, over at the Aerovironment booth, the camera-equipped Qube quadrotor has the word "POLICE" stenciled on the side and is marketed toward local law enforcement. These UAS are not destined for Afghanistan, but for the trunks of police cars and firefighting vehicles.

On the show floor, we saw submersible robots aimed at everything from harbor security to oceanographic science, and surface maritime robots augmented with fully automatic weapons but also simply with cameras, beacons, and scientific instruments. There was at least one aerial drone fitted with a drill for coring ice samples in frozen areas, presumably for oil and gas exploration as well as for climatological and atmospheric science missions in polar regions. One unmanned helicopter was fitted with fairly simple crop-spraying equipment, the kind you could find at Tractor Supply, converting it swiftly into a crop-dusting drone. By merging two mature and relatively unexciting technologies, we got something that could be huge for farmers.

The point being that at a trade show that has been ruled by its almost singular military customer for decades, diversification is spiking. The unmanned systems world is seriously branching out. There are still a lot of challenges to be met, especially with integrating remotely piloted and semi-autonomous or autonomous aerial systems into an airspace already cluttered with manned aircraft, but you might be surprised how hard--and in some cases for how long--some of the most highly respected names in aviation and robotics have been working on solving these problems, just waiting for the day the airspace finally opens up. Top people at avionics-maker Rockwell Collins' UAS group told me how confident they are that we can safely put large unmanned aircraft and manned aircraft into shared airspace in a reasonable time frame without rewriting the book on aviation. They've been at this for years. We are not starting from scratch.

Necessity will mother invention in this space, and that's the most exciting thing. At AUVSI 2012, we see a lot of vehicles--platforms that can carry payloads robotically, or autonomously in many cases. Most companies are keeping their payloads modular so they and their customers can customize them as they find new applications. Those applications are all around us, and they extend far beyond power line inspection, public safety, and site security. The most interesting robot at AUVSI this year is the one nobody has built yet. "Everyone is going to want something different," Minson says. "The most interesting things are the things we can't think of yet."




New York's Newest Bookstore, Singularity&Co., Brings Sci-Fi Back From the Dead

Singularity&Co Bookstore Martha Harbison

Many of the oldest and best science fiction stories and novels are, sadly, long out of print. The only way to read them is to dig through second-hand bookshops, rummage sales, or dusty attics, or to hope that the local library still keeps its old paperbacks around. A group in Brooklyn called Singularity&Co. wants to change that. The attack is two-fold: raise some of these long-lost stories from their graves and release them as e-books, and showcase some really killer hardcovers and paperbacks at an honest-to-god brick-and-mortar bookstore near the East River. Being a dedicated sci-fi nerd, I felt obligated to check it all out for myself at their grand opening last night.

Here are the basics. Singularity&Co. is a small, Kickstarter-funded start-up that aims to publish one out-of-print title a month, available for download on a variety of platforms. They've released two so far: A Plunge Into Space by Robert Cromie and The Torch by Jack Bechdolt. The work isn't always easy--tracking down an author's estate, any copyright holders on the cover art, and sometimes even scannable copies of the books themselves has proved challenging. For example, there are only seven known copies of what will eventually be the group's (as yet unannounced) third book release, explained co-founder Ash Kalb, and he had to take a thousand-mile road trip to get a look at a pristine copy. I asked Kalb who his "dream authors" would be for the Singularity&Co. e-book project. He answered first with "Heinlein," but when pressed for something more obscure, he came out with New York City native and Futurian Cyril M. Kornbluth.

The air conditioning at Singularity&Co.'s new location in DUMBO (18 Bridge Street) was no match for the crowds, and it was tough to mingle due to so many people packed into a modest space. But I did get a chance to chat with two of the project's founders, a couple of the partygoers and, totally randomly, one of my favorite cover illustrators, Greg Manchess--just one of the luminaries who've supported this project since its inception on Kickstarter (Neil Gaiman and Ken MacLeod are also fans).

The shop itself is organized in an interesting way that caters to hardcore SF fans but will be frustrating as hell to sci-fi n00bs with Internet-suggested reading lists: books are shelved by publication decade and then alphabetically, so you're going to have to know when an author was active if you're looking to, say, read all of the available works of Ursula K. Le Guin. That said, organizing by decade allows the casual browser to sift through books that might be thematically, stylistically or tonally linked--60s sci-fi has a markedly different flavor than that from the 80s--rather than muddling through a strictly alphabetical arrangement to find the perfect book to fit a mood or preference. Other genres make an appearance, with a few shelves dedicated to horror (I found an HP Lovecraft collection and Fevre Dream by George RR Martin pretty much immediately) and about half a foot of shelf space dedicated to "Sexy Times." The shop's numerous Piers Anthony books were not shelved there, but for everyone's sanity, they probably should be.

In all, I came away impressed. The book selection at the store is good and runs the gamut from the obscure to the overrated, and for anyone who is new to the genre, that's exactly what you need in a specialty bookshop. I saw Stross and Scalzi books displayed as prominently as Clarke and Asimov. I can't wait to see what the upcoming e-books will be. And I'm glad that someone is trying to build a space where fans old and new will want to spend time and argue over whether the Ringworld really is unstable.



This Week in the Future, August 6-10, 2012

This Week in the Future, August 6-10, 2012 Baarbarian

Curiosity wins the gold medal for being just the best Mars rover out there. Way to go, buddy. You've made this a real fun week.

Want to win this medal-winning Baarbarian illustration on a T-shirt? It's easy! The rules: Follow us on Twitter (we're @PopSci) and retweet our This Week in the Future tweet. One of those lucky retweeters will be chosen to receive a custom T-shirt with this week's Baarbarian illustration on it, thus making the winner the envy of their friends, coworkers and everyone else with eyes. (Those who would rather not leave things to chance and just pony up some cash for the t-shirt can do that here.) The stories pictured herein:

And don't forget to check out our other favorite stories of the week:

  • Lockheed Martin's 'Flying Humvee' Concept Gets a Lift from the F-35 Joint Strike Fighter
  • Test Drive: The 2013 Audi Allroad
  • May 1941: Eat Ice Cream Before Exercising or Replace it With Shopping
  • Curiosity's Legacy: What This Week's Successful Landing Means for the Future of Robotic Space Exploration
  • At America's Biggest Drone Show, the Focus Shifts Toward Domestic Skies


    Jailbreak Your iPhone For New Functions in the Notification Center

    Feature Presentation A simple software hack puts text messages, speed dial and frequently used apps in the iPhone's Notification Center. Greg Maxson

    Jailbreaking--altering an iPhone or iPad's firmware to access unlicensed apps--became less useful as Apple released more feature-rich iOS updates. But now developers have come up with a new reason to jailbreak iDevices: They've enabled users to add settings, music controls and more to Notification Center, iOS 5's drop-down information panel. That puts a huge amount of functionality in one convenient location, with only a five-minute tweak. Here's a look at the best new features and how to get them.

    STEP 1: JAILBREAK THE DEVICE

    Hackers usually develop a jailbreak a month or two after Apple releases a new version of iOS. At press time, a jailbreak was available for the most recent iOS version, 5.1.1, but iOS 6 is scheduled for release this fall. For the latest jailbreak instructions, visit Lifehacker.

    STEP 2: GET NEW FEATURES

    All the Notification Center tweaks are on Cydia, the jailbreak app store. After installing them, go into the iPhone or iPad's Settings menu to enable them in the Notifications section. To avoid having to reinstall jailbreak apps after upgrading to a new version of iOS, download PKGBackup, an $8 app on Cydia that does it automatically.

    A. Skip the Navigation

    SBSettings has a Notification Center widget that makes it possible to alter frequently used settings, such as screen brightness and airplane mode, with a drag and a tap.

    AppsCenter adds a scrollable panel to Notification Center for launching favorite apps without returning to the device's home screen.

    B. Get in Touch Quickly

    WeeCompose provides an input for sending text messages without opening Apple's native Messages app.

    Favorite Contacts adds speed dial to Notification Center, placing images of selected contacts in a scrolling list.

    C. Get News and Entertainment

    RSSWidget makes it possible to read RSS feeds in Notification Center.

    StatusGoogle enables Google searching without the need to open the browser first.

    Music Controls Pro shows the song that's currently playing (on iTunes or other music apps), as well as artist, album art and volume controls in Notification Center. It also gives the option to stream the audio to any AirPlay-compatible device and lets users pause and navigate through a playlist from the phone's lock screen.

    D. Save Data and Battery Life

    WeeTrackData is a handy data meter to help avoid exceeding your plan limits.

    WeeKillBackground preserves the battery by closing all the apps running in the background with a single tap.



    I Am Warplane

    How the first autonomous strike plane will land on aircraft carriers, navigate hostile airspace and change the future of flight

    On a clear day early next year, an unmanned aircraft painted in the dark gull gray of a Navy fighter jet will take off from a runway at Naval Air Station Patuxent River in Maryland, bank over the Chesapeake Bay and set a course toward an aircraft carrier, motoring several miles out over the Atlantic. As it approaches the carrier, the craft will open communication with air-traffic control, request landing clearance from the deck officers and establish a glide slope that accounts for wind velocity, ship speed and even the slight rolling of the ship's deck. Pilots consider a carrier landing one of the hardest operations in all of flight. The X-47B will land without any pilot at all.
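
    The geometry of that approach is simple trigonometry, even if the engineering around it is not. Here is a minimal Python sketch of the glide-slope arithmetic--the 3.5-degree angle, the function name and the toy corrections are all assumptions for illustration, not Northrop's actual flight code:

        import math

        GLIDE_SLOPE_DEG = 3.5   # a typical carrier approach angle (assumed)

        def target_altitude(range_to_ramp_m, ship_speed_mps, closing_speed_mps,
                            deck_heave_m):
            """Altitude to hold at a given range from the carrier's ramp.

            The ship keeps steaming ahead during the approach, so the
            touchdown point is projected forward; deck heave shifts the
            aim point up or down. A toy model, not a control law.
            """
            seconds_to_touchdown = range_to_ramp_m / closing_speed_mps
            projected_range = range_to_ramp_m + ship_speed_mps * seconds_to_touchdown
            return projected_range * math.tan(math.radians(GLIDE_SLOPE_DEG)) + deck_heave_m

        # Three miles out (about 4,800 m), ship making 13 m/s, closing at 60 m/s:
        print(round(target_altitude(4800, 13, 60, 0.5)))   # roughly 358 meters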

    The X-47B is the world's first autonomous warplane. From takeoff through landing, it flies with little or no direct control from human handlers. Although it is a prototype not intended for actual combat use--the Navy calls it a technology demonstrator--engineers designed it to slip into contested airspace, dodge antiaircraft defenses like cannons and surface-to-air missiles, and deliver strikes or perform reconnaissance. When it completes its mission early next year, the X-47B will be both the first tailless aircraft and the first unmanned one to ever land on a carrier. And it will mean that the Navy, armed with some future variant, will have the capability to order unmanned sorties from carrier groups anywhere in the world within hours of a clash.

    The X-47B is also a big step forward in robotic flight. The U.S. military has roughly 10,000 unmanned aerial vehicles (UAVs), which ply the skies above places like Afghanistan, Pakistan, Yemen and, sometimes, the U.S. Engineers call such aircraft man-in-the-loop systems, and humans typically control them remotely, whether from a ground base nearby or a command post a continent away. The X-47B is a man-on-the-loop system: While people retain control over the general mission, the moment-to-moment decisions are left to the aircraft's robot brain.

    Outside of flight, man-on-the-loop systems are becoming increasingly common. Scientists have been using autonomous probes to map the ocean floor for the past decade. The U.S. Department of Energy recently deployed autonomous ground vehicles to patrol the Nevada National Security Site, a former proving ground for nuclear weapons. And farmers are starting to use self-driving tractors to till fields and harvest crops. What sets the X-47B apart from those systems is the nature of its environment. Rather than a deserted waste site or an empty field, the X-47B is designed to operate on and around an active aircraft carrier.

    After five years of development, engineers at Northrop Grumman and within the Navy's Unmanned Combat Air Systems (UCAS) group have created a robot brain capable of operating in such a complex setting. It can process vast amounts of flight data, make near-instantaneous decisions and guide an aircraft to a flawless, squealing halt on the deck of a carrier. Now the designers face a different kind of challenge: training the aircraft to work with people.



    Robotic autonomy is fundamentally different from automation. Automated systems perform repetitive, preprogrammed tasks, and they have played a role in flight for decades. The Navy has employed a hands-off radar-based system to automatically recover F/A-18 fighter jets since the early 1990s. Autonomy connotes self-governance. It implies the ability to assess fluid situations and form dynamic responses. Some modern autopilot systems possess autonomous features--they can adjust throttle to optimize airspeed or move fuel between tanks to balance the aircraft's weight without human permission--but humans still act as a backstop, sitting inches from the controls.

    For the Army and Air Force, launching automated or semiautomated UAVs from ground bases into uncontested airspace has become instrumental to military operations. But on Naval carrier missions, they are all but useless. The common Predator and Reaper drones are too large and slow to take off from a carrier deck. They are also too bulky to operate as stealth craft and too cumbersome to dodge surface-to-air missiles or cannon fire, which means they can't fly in contested space. And even if they could be made smaller, faster and nimbler, landing one on a carrier via joystick and video feed would be all but impossible.

    When the Navy awarded Northrop Grumman the development contract for the X-47B in 2007, it had three main requirements: The aircraft must be carrier-suitable, it should be able to evade enemy radar, and it should be autonomous, not simply automated. The team already had a stealth frame designed, a blended-wing configuration known as a cranked kite. It lacked sharp surface features that might return strong radar signatures. As they adapted it, they kept it small, just 62.1 feet across, and with wings that folded up so the finished aircraft would store easily on a hangar deck. They also managed to build in a Pratt & Whitney F100 jet engine--the same one that powers some F-15 and F-16 fighter jets--making the aircraft faster and more powerful than the propeller-driven Reaper or Predator drones.



    With the X-47B's basic design in place, engineers began to develop the aircraft's senses. They loaded it with GPS equipment, accelerometers, altimeters, gyroscopes and other classified hardware, all aimed at providing the flight-control computer with the information necessary to sustain autonomous flight. They also developed a high-speed data link capable of swapping digital information with a ground station or aircraft carrier across at least 50 nautical miles.

    As one group of engineers worked on the hardware, another built a highly sophisticated autopilot system controlled by a layer of artificial intelligence. The software would translate the sensor data into decisions and commands for the flight computer. To train the X-47B, they ran its software through tens of thousands of virtual missions, pitting it against a range of simulated conditions and refining its code with every trial.
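
    That training regimen is easier to picture as a loop. A rough Python sketch of what such a virtual-mission harness might look like--run_mission and every condition name here are invented stand-ins, since the real simulator is proprietary:

        import random

        def run_mission(conditions):
            """Stand-in for a full flight simulation; returns True on success."""
            return random.random() > conditions["difficulty"]

        def stress_test(trials=50_000):
            failures = []
            for _ in range(trials):
                conditions = {"wind_mps": random.uniform(0, 25),
                              "deck_pitch_deg": random.uniform(-2.0, 2.0),
                              "difficulty": random.random() * 0.01}
                if not run_mission(conditions):
                    failures.append(conditions)
            return failures   # the cases engineers would refine the code against

        print(len(stress_test()), "failed missions to study")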

    In July 2010 the team at Northrop's manufacturing facility at Plant 42 in Palmdale, California, loaded the X-47B onto a trailer and towed it up the road to Edwards Air Force Base, where it would make its maiden flight the following February. Beneath a blanket of thin, high clouds, a mix of Northrop staff and Navy personnel watched as the craft screamed down the runway, lifted off and made a cautious 29-minute flight, circling the base at 5,000 feet while downlinking data to researchers on the ground. Engineers had planned to make 50 such flights to test the limits of the X-47B, but it performed so well and so consistently that they stopped after just 16 trials. The next step was to get it ready for the carrier.

    * * *

    When a pilot approaches an aircraft carrier, he is entering one of the most complex and least forgiving environments on Earth. Operations occur at hundreds of miles an hour, with a variable number of pilots, planes and deck personnel working on a bucking, wind-blown carrier deck. After a pilot radios his intention to land, air-traffic controllers either clear him for approach or direct him into a holding pattern. They also supply the pilot with weather and deck conditions. On the approach, the pilot typically relies on the landing signal officer (LSO) to guide him using light signals and visual cues. The air boss, an officer in the primary flight control tower, or PriFly, oversees the operation as well. Seconds before touchdown, the LSO makes a final landing determination, waving off the pilot for another try if the glide slope or course looks risky.

    The process of landing planes on a carrier deck, called recovery, has not changed significantly since World War II, nor will it in the near future. The challenge, then, is how to fold the X-47B into a highly fluid human system without disrupting it. Engineers approached the problem in a few different ways. First, they automated much of the chatter that goes on between pilots and air-traffic controllers. Instead of verbally reporting fuel levels or altitude readings to air-traffic control, the aircraft beams that data directly over its link to the tower. Rather than relying on a verbal description of conditions, it downloads the carrier's position, speed and pitch from sensors on the ship 100 times a second and adjusts its path to match.
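
    That 100-times-a-second correction cycle is the heart of the landing. Here is one tick of such a loop sketched in Python--the field names and numbers are invented for illustration, since the real interface is classified:

        def adjust_glide_path(deck, path):
            """Fold the ship's latest reported state into the aircraft's aim point."""
            corrected = dict(path)
            # Lead the ship: aim where the deck will be, not where it is now.
            corrected["aim_x_m"] = deck["x_m"] + deck["speed_mps"] * path["eta_s"]
            # Absorb the deck's pitch into the target altitude.
            corrected["aim_alt_m"] = path["aim_alt_m"] + deck["pitch_offset_m"]
            return corrected

        # One update; the X-47B would repeat this every 10 milliseconds.
        deck = {"x_m": 0.0, "speed_mps": 13.0, "pitch_offset_m": 0.4}
        path = {"aim_x_m": 0.0, "aim_alt_m": 120.0, "eta_s": 45.0}
        print(adjust_glide_path(deck, path))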

    Where direct communication between aircraft and human is unavoidable, designers translated verbal commands into a digital language. They started with the 100-plus-page carrier operations manual and boiled it down to 53 critical commands. Many involve taxi and takeoff, along with flight checks and other safety routines. Engineers then built a software interface that displays the commands. Working through the interface in the PriFly, air bosses can issue the same orders to the X-47B that they might to a pilot. The LSOs got a new tool, too. Designers updated the handheld device known as the "pickle," which LSOs use to grant or deny final landing clearance, so that it can communicate directly with the X-47B.
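
    Boiling a manual down to 53 commands is, in software terms, defining a protocol. A minimal sketch of how such a command set might be encoded in Python--the command names below are invented examples, not the Navy's actual vocabulary:

        from enum import Enum

        class DeckCommand(Enum):
            """A handful of stand-ins for the 53 critical commands."""
            TAXI_TO_CATAPULT = 1
            HOLD_POSITION = 2
            SPREAD_WINGS = 3
            CLEARED_TO_LAUNCH = 4
            WAVE_OFF = 5

        def issue(command: DeckCommand) -> bytes:
            """Encode an air boss's order as a message for the data link."""
            return command.name.encode("ascii")

        print(issue(DeckCommand.WAVE_OFF))   # b'WAVE_OFF'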

    The team also determined what would happen if communication were to break down. If the data link failed on approach, or the LSO waved off the X-47B from final landing, the craft would fly past the carrier and clear of other aircraft and settle into a wide loop that would bring it back around for another approach. If communication were irreparably severed, it would search for a terrestrial landing spot or, as a last resort, ditch into the ocean.
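
    That contingency logic amounts to a small decision ladder. Rendered as Python--a simplification of what the article describes, with invented argument names:

        def contingency(waved_off, link_lost, link_recoverable, land_site_in_range):
            """Reduce the failure modes described above to rules (illustrative only)."""
            if waved_off or (link_lost and link_recoverable):
                return "overfly the carrier, loop wide, set up another approach"
            if link_lost and not link_recoverable:
                if land_site_in_range:
                    return "divert to a terrestrial landing site"
                return "ditch in the ocean"   # the last resort
            return "continue approach"

        print(contingency(waved_off=False, link_lost=True,
                          link_recoverable=False, land_site_in_range=True))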

    By the time the UCAS group developed the basic communication software and interfaces, the X-47B could technically have made a carrier landing. Even on a heaving carrier deck, researchers predicted, the margin of error during a landing would be within a few feet. The question was not whether the X-47B could work with people, but whether people could work with it.

    In December 2011 the Navy shipped the X-47B to Patuxent River, known as Pax River. The test facility is one of two in the world with a mock carrier deck, outfitted with a steam catapult and arresting cables. Engineers also built simulation rooms for working out software bugs and training carrier personnel. One is a replica of the air-traffic control center, complete with radar screens and communication equipment. The other is a re-creation of the PriFly, where four flat screens display the same view an air boss might see.

    Until its first real carrier flight early next year, engineers at Pax River will run the X-47B through tests on the mock carrier deck, practicing catapults and arrested stops under a variety of conditions and with carrier personnel. At the same time, air-traffic controllers, air bosses and LSOs will run through virtual takeoff and landing scenarios in the simulation rooms, building the experience, confidence and trust that they will need for a successful operation. After completing the carrier landing, the X-47B will return to Pax River to train for the next milestone: an autonomous midflight refueling, which is scheduled for 2014.

    * * *

    At the moment, the X-47B program is scheduled to end sometime after its successful autonomous midflight refueling. It's uncertain what will happen then. The Navy will not discuss plans for the prototype other than to say it will never see active duty. Each of its two weapons bays could carry a 2,000-pound bomb, but neither ever will; one is currently filled with data-gathering instruments and the other is empty. And the X-47B still has room for improvements. It cannot (yet) perceive hand signals or other visual cues, so humans need to control it directly during taxi and deck operations.

    Yet even if the X-47B never develops beyond a technology demonstrator, the system that governs it could have a lasting impact on flight. Designers will almost certainly integrate something like it into future military aircraft and perhaps into commercial aircraft as well. In February Congress approved a four-year, $63-billion budget to implement the NextGen program, a plan from the Federal Aviation Administration to upgrade and digitize America's national airspace, much as the Navy is digitizing the airspace around its aircraft carriers. In NextGen, engineers would replace radar with GPS. Planes would communicate over a data link with towers and other aircraft, both manned and unmanned (the budget includes a mandate to integrate civilian drones into the national airspace by 2015). The NextGen system could allow pilots to choose more-direct flight paths between destinations, reducing flight times and increasing efficiency.

    Engineers have already proved that the X-47B's autonomous system can pilot a conventional manned aircraft. Last summer, they plugged it into the avionics of an F/A-18 fighter jet. On July 2, the jet made 36 approaches, 16 touch-and-go landings and six full arrested landings on the USS Eisenhower. During the tests, a very trusting pilot remained in the cockpit as a precaution, but he never once touched the stick.

    Clay Dillow is a contributing writer at Popsci.com. His most recent story for Popular Science, in June, was about a more efficient helicopter engine.



    Video: NASA's Experimental Morpheus Lander Fails Flight Test by Exploding

    Morpheus, Testing its Design Limitations--Literally NASA

    Space exploration doesn't always go smoothly. For instance, the triumph of Apollo 11 was followed by the failed mission and near-disaster of Apollo 13. Prior to launching Alan Shepard into space in 1961, NASA blew countless space rockets to pieces on the launchpad. Russia still crashes its spaceships periodically. And lest last week's euphoria over the Mars rover Curiosity landing have you thinking NASA's got this spaceflight thing down to a pure science, please see the video below. Late last week, NASA's experimental unmanned Morpheus lander failed spectacularly during vehicle tests. Really spectacularly. With fire and explosions and whatnot.

    Morpheus is designed for lunar exploration, though NASA acknowledges that it could--with some modifications--be reconfigured for asteroid exploration as well. It is powered by thrusters burning liquid oxygen and methane that let it hover and make soft landings, which would allow it to deliver more than 1,000 pounds safely to the moon's surface.

    And maybe some future Morpheus lander will do exactly that. The one making its first untethered free-flight in the video below, as you will see, is now toast. But while the test vehicle smolders, the program itself is still very much alive, if set back by 20 flight tests and many, many months. NASA and its partner Armadillo Aerospace have already begun work on a second vehicle that could be ready for tests early next year.

    [SPACE]



    Electronic 'Smart Fingertips' Could Give Robots and Doctors Virtual Touch

    Electronic Finger Electronic fingertips could give robots a sense of touch, or allow doctors to operate via virtual touch. John Rogers/University of Illinois

    The same touchy engineers who gave us the first peelable epidermal electronics last year have a new virtual tactile system: Smart fingers, which could someday bring a real sense of touch to telepresence applications. Surgical robots or human doctors could virtually feel surfaces, temperatures and other characteristics, through special smart gloves designed to trick the brain into thinking it's feeling.

    Materials scientist John Rogers and colleagues at the University of Illinois at Urbana-Champaign designed flexible, skin-molded fingertip sensors made of super-thin silicon sheets. Gold conductive lines form a circuit, and the entire sensor is embedded in a flexible polymer material called polyimide. This is transferred to a thin silicone mold, which can be fitted to a fingertip shape. The sensor provides tactile feedback by generating electric currents when pressed against a surface; the currents are transmitted to your actual skin.

    In tests, Rogers and colleagues donned the electronic fingertips and started touching flat objects. The resulting currents translated to a slight tingling sensation in their flesh-based fingertips. This is a step toward creating electrical signals that could be interpreted by the nervous system, according to ScienceNOW -- eventually, electronic fingers could create patterns of signals that can recreate sensations, from heat to texture.

    The fingertips could also include separate sensors to detect things like motion or vibration. Rogers says in a news release that they could be used as medical devices -- perhaps for ultrasound imaging, or even as an ablation device, burning away problem tissue or creating sutures. "Imagine the ability to sense the electrical properties of tissue, and then locally remove that tissue, precisely by local ablation, all via the fingertips using smart surgical gloves," he said.

    The main breakthrough here is the fingertips' stretchability and flexibility. Plenty of haptic feedback devices exist, but they're largely bulky, or flat, or otherwise unable to morph very well to the soft geometries of the body. This system is very similar to the first peelable, temporary tattoo-like skin electronics Rogers and colleagues developed last year. In that system, morphable electrodes are pasted onto the skin with water, hence the comparison to temporary tattoos. They could be used as health monitoring devices or machine-human interfaces, among other uses.

    These fingertip sensors could conceivably be designed to fit any body part in need of some tactile feedback, Rogers and colleagues say -- like perhaps the heart, where it could sense muscle contraction or get things pumping properly. The research appears in the journal Nanotechnology.

    [via KurzweilAI]



    Video: Indestructible Military Inchworm-Bot Survives Attack By Bootheels and Hammers

    Meshworm Meshworm is inspired by earthworms. MIT News via YouTube

    Soft, bendy robots could have a wide variety of benefits, from squishing into tight spaces to conduct surveillance, to crawling through a person's body to deliver drugs or take medical images. But it's hard to build entirely soft machines, with soft bodies, soft batteries and soft motors. A new design developed at MIT and Harvard is both soft and tough, inching around like an earthworm yet surviving multiple cruel blows from a rubber mallet.

    The robot is nicknamed "Meshworm," after the earthworms that inspired its design. Rather than using liquid, air or silicone gears to get around, like other soft robots we've seen, Meshworm uses artificial muscle made from a shape-memory alloy. It looks very much like a squiggling worm, as you can see in the video below.

    Researchers led by MIT mechanical engineering professor Sangbae Kim took a flexible mesh tube and encircled it in wire made of a nickel-titanium alloy, a material that contracts and expands with heat. They separated the wire into segments, much like those found in an earthworm, and applied a current to heat some of the segments. This made Meshworm's soft body squeeze together sections at a time, which caused it to inch along a surface. This type of locomotion is called peristalsis, and it's the same action used by snails, sea cucumbers and our own gastrointestinal tracts (to move food into our stomachs).
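
    The control problem is charmingly simple compared with most robots: fire the segments in order, over and over. A toy Python version of the peristaltic cycle--segment count, pulse time and the actuator interface are all invented for illustration:

        import time

        SEGMENTS = 5
        PULSE_S = 0.05   # how long each wire segment is heated (assumed)

        def crawl(cycles, apply_current):
            """Heat the shape-memory segments in sequence, nose to tail.

            Heating a segment contracts it; letting it cool lets the mesh
            spring back, producing the wave that drags the body forward.
            """
            for _ in range(cycles):
                for segment in range(SEGMENTS):
                    apply_current(segment, on=True)
                    time.sleep(PULSE_S)
                    apply_current(segment, on=False)

        # A stand-in actuator that just logs; real hardware would switch a driver.
        crawl(1, lambda seg, on: print(f"segment {seg} {'heat' if on else 'cool'}"))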

    The team examined earthworm body structure to come up with this idea, and found that earthworms use circular and longitudinal muscle groups to inch themselves along. The mesh tube plays the role of the longitudinal muscles, and the shape-memory alloy wire plays the role of the circular muscle group.

    The best part may be the robot's durability. There are no pneumatic pumps, rigid gears or batteries to break, so the robot can survive all kinds of assaults. This could make it useful for military applications--DARPA funded this research. Kim and colleagues subjected the robot to a battery of tests to see how it held up, including smashing it with a mallet and stepping on it.

    "You can throw it, and it won't collapse," Kim says. "Most mechanical parts are rigid and fragile at small scale, but the parts in Meshworms are all fibrous and flexible. The muscles are soft, and the body is soft … we're starting to show some body-morphing capability."

    The team recently published details of the design in the journal IEEE/ASME Transactions on Mechatronics.

    [via MIT News]




    Researchers Reach the 'Highest Possible' Resolution for Color Laser Printing at 100,000 dpi

    The Lovely Lena Söderberg Resolved at 100,000 dpi via Nature

    A lot has been written about the perceived benefits (or lack thereof) of resolutions higher than the human eye can perceive--displays that go beyond HD and retina, or cameras and scanners that capture imagery at pixel counts so far past the threshold of human vision as to be meaningless, at least visually speaking. Undaunted, researchers in Singapore claim they have achieved the highest resolution possible for color laser printing by recreating the classic Lena test image at 100,000 dpi.

    Why is this the upper limit of resolution? According to the researchers, if the individual pixels were any smaller, light would simply bounce off them and diffract, blurring the picture rather than sharpening it. And these aren't your average pixels, either. Each is a tiny gold or silver nanodisk fixed to a tiny pillar. Color is conjured by adjusting each disk's diameter and the spacing between it and its neighbors, creating an effect called plasmon resonance that is perceived by the eye as different shades.
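
    The diffraction argument is easy to sanity-check with a couple of lines of Python arithmetic:

        INCH_NM = 25.4e6                 # nanometers per inch

        pitch_nm = INCH_NM / 100_000     # pixel spacing at 100,000 dpi
        print(round(pitch_nm))           # 254 nm -- roughly half the wavelength
                                         # of visible light, right at the limit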

    Or so we're told. The abstract presenting this research is full of interesting things you probably didn't know about light, color, and nano-science. Read on.

    [Nature via The Verge]



    Spotting Cancer Cells in Blood With a 27-Picosecond Camera

    The Busy Human Bloodstream National Cancer Institute

    A simple blood test that offers early detection of cancer in the human body has long eluded medical researchers, but a team at UCLA is getting closer. By blending an ultra-fast camera and a powerful optical microscope with software that can process the data they produce at extremely high speeds, the team hopes it can spot circulating tumor cells (CTCs) that have broken away from cancerous tumors in blood samples, potentially making early cancer detection as simple as taking a blood draw.

    When a cancerous tumor forms in the body, cells break away and get loose in the bloodstream. These are the aforementioned CTCs and they are a very real threat, helping cancer to metastasize throughout the body. But they also serve as a telltale sign of the presence of cancer in the first place--evidence right there in a person's blood that tells doctors that cancer is hiding somewhere in the body.

    Detecting those CTCs is challenging, however. Blood samples contain a range of cells--red blood cells and white blood cells of course, but also platelets and other particulates--and CTCs are few among the many. In other words, there's a lot of noise in a blood sample if you are strictly looking for CTCs.

    The UCLA team is trying to overcome this noise and zero in on the CTC signal by using a super-fast microscopic camera that it devised some years back, one capable of recording imagery at six million frames per second. Called STEAM (for serial time-encoded amplified microscopy), the technology uses extremely short laser pulses (one billionth of one second long) to image blood samples. That's like having a shutter speed of just 27 picoseconds, or 27 trillionths of a second.
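
    Those numbers are worth turning over, as this bit of Python arithmetic shows:

        frames_per_second = 6_000_000
        frame_interval_s = 1 / frames_per_second
        print(frame_interval_s)              # ~1.7e-07: a new image every 167 ns

        shutter_s = 27e-12                   # the 27-picosecond exposure
        print(shutter_s / frame_interval_s)  # the shutter is "open" for roughly
                                             # 0.016 percent of each frame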

    Such lightning-fast imaging requires extremely fast processing, so the STEAM camera feeds its data into a computer that can quickly assemble images and identify the poorly defined shape of cancer cells against a backdrop of well-defined normal blood cells. The team is currently clinically testing the process against lung, stomach, breast, prostate, and intestinal cancers, and hopes to soon add ovarian and pancreatic cancers to that list (the latter two are fast-spreading cancers, so early detection is even more important in these cases).

    If their clinical tests pan out, the scientists may just have a quick and (mostly) painless noninvasive diagnostic that can quickly screen for various cancers during a routine doctor's visit, upping the rate of early detection and doubtlessly saving lives.

    [Scientific American]



    New Algorithm Predicts Your Future Movements Within 65-Foot Accuracy

    App's average error is just the distance between home plate and the pitcher's mound

    Plenty of hay has been made over which apps and cell phones track our movements, but so far it has been difficult to accurately determine where we're going next -- people can be unpredictable, after all, and make dinner plans at random new places on a whim. In that case, what's a prediction algorithm to do? Track all your friends, too, it turns out.

    A team of British researchers developed a new algorithm that can predict where you'll be within 24 hours, with 20-meter (about 65-foot) accuracy. This is a major improvement over other attempts to predict future movements, which have been based in cell phones and even in some cars. These systems track your location over time to determine patterns and habits throughout the days and weeks, figuring out likely destinations at certain periods. But what about when the pattern is disrupted?

    Mirco Musolesi, Manlio De Domenico, and Antonio Lima of the University of Birmingham combined individual tracking data with data from everyone in a subject's phone book. Their algorithm finds correlations between a user's phone and the movements of contacts in that phone's contact list, and can make an educated guess about where a user is going -- even if it's a major deviation from that person's normal routine. Even if you change your typical path by huge margins, the algorithm's error rate is only about 65 feet, less than an average city block.
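
    The intuition translates to code readily enough. Here is a toy Python version of the idea--blending your own routine with the pull of your contacts; the 0.7 weighting and the data format are assumptions for illustration, not the Birmingham team's actual model:

        def predict_next(own_history, friends_now, alpha=0.7):
            """Blend a user's habitual spot with where their contacts are.

            own_history -- (x, y) fixes for this hour on past days
            friends_now -- current (x, y) fixes of people in the phone book
            alpha       -- weight on personal routine vs. social pull
            """
            def centroid(points):
                xs, ys = zip(*points)
                return sum(xs) / len(xs), sum(ys) / len(ys)

            hx, hy = centroid(own_history)
            fx, fy = centroid(friends_now)
            return alpha * hx + (1 - alpha) * fx, alpha * hy + (1 - alpha) * fy

        # A routine-breaking night out: friends cluster across town, and the
        # prediction drifts toward them instead of the usual haunt.
        print(predict_next([(0, 0), (0.1, 0)], [(5, 5), (5.2, 4.8)]))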

    The algorithm won Nokia's Mobile Data Challenge, but it could have some real-world implications, too. Location-based services could use it to predict where you might eat lunch tomorrow, for instance, and send you coupons for restaurants in the likely vicinity. This could be attractive for businesses that would love to take advantage of your spontaneous activity, along with your humdrum daily routines.

    Or in a more ominous scenario, authorities could use location-prediction to predict future crimes and where they might occur, something that's already being done on city-wide scales in places like Santa Cruz, Calif. The main obstacle would be privacy -- but as Slate points out, plenty of location-service users have no problem giving away where they are, at all times. Just look at services like Foursquare, Yelp or even Facebook. People like sharing where they are and where they plan to go -- so it's not a far leap for advertisers, or others, to start following along with those plans.

    [via Slate]



    How to Teach a Robot to Improvise

    Evasive Maneuvers Graham Murdoch
    Drones are learning the difference between a car and a tree--and how to make their next moves

    Self-piloted drones have become sophisticated enough to land on moving aircraft carriers, but put a single unexpected tree in the way, and they will crash. Now a five-university group that includes specialists in biology, computer vision and robotics is trying to teach drones to dodge obstacles on the fly. Working with $7.5 million from the Office of Naval Research, the scientists aim to build an autonomous, fixed-wing surveillance drone that can navigate through an unfamiliar city or forest at 35 miles an hour.

    The group's inspiration is the pigeon. Hardy, plentiful and receptive to training, the birds are easy to study. In flight, they estimate the distance between themselves and objects ahead by quickly processing blurry, low-resolution images, just as a drone will need to do. And, crucially, they have a tendency to make decisions at the last moment--within five feet of an obstacle.

    The first step is to teach robots to differentiate between obstacles and empty space. Engineers have already figured out how to train point-and-shoot cameras to spot faces in a photo: In a process called supervised learning, a technician feeds millions of images into a computer and tells it to output a "1" when the image contains a human face and a "0" when it does not. But this style of supervised learning would be an impossibly labor-intensive way to train a drone. A human would have to label not just faces but every possible object the robot might encounter. Instead, Yann LeCun, a professor of computer and neural science at New York University who leads the drone's vision team, is developing software that will allow the drone to draw conclusions about what it's seeing with much less human coaching. By mimicking the hyperefficient parallel processing method that the brain's visual cortex uses to classify objects, the software enables features from the raw video frame to be extracted much more quickly. As a result, the drone's human instructors need to show it only a few hundred to a few thousand examples of each category of object ("car," "tree," "grass") before it can begin to classify those objects on its own.
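
    The supervised-learning recipe is simple to state in code, even if gathering millions of labels is not. Here is a deliberately tiny stand-in classifier in Python--a nearest-centroid toy, not LeCun's convolutional networks, with made-up two-number feature vectors standing in for images:

        def train_classifier(examples):
            """examples: (feature_vector, label) pairs, labels like 'tree'.

            Average the features seen for each label, then classify new
            inputs by whichever average they land closest to.
            """
            sums, counts = {}, {}
            for features, label in examples:
                counts[label] = counts.get(label, 0) + 1
                acc = sums.setdefault(label, [0.0] * len(features))
                for i, f in enumerate(features):
                    acc[i] += f
            centroids = {lab: [v / counts[lab] for v in acc]
                         for lab, acc in sums.items()}

            def classify(features):
                dist = lambda c: sum((a - b) ** 2 for a, b in zip(features, c))
                return min(centroids, key=lambda lab: dist(centroids[lab]))
            return classify

        classify = train_classifier([([0.9, 0.1], "car"), ([0.2, 0.8], "tree")])
        print(classify([0.3, 0.7]))   # 'tree'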

    Once the researchers have taught the drone to see, they will need to teach it to make decisions. That involves grappling with the inherent ambiguity of visual data--with deciding whether that pattern of pixels ahead is a tree branch or a shadow. Drew Bagnell and Martial Hebert, roboticists at Carnegie Mellon University, are developing algorithms that will help the robot deal with visual ambiguity the way humans do: by making educated guesses. "They can say, 'I'm 99 percent sure there's a tree between 12 meters and 13 meters away,' and make a decision anyway," Bagnell says.

    It will take a lot of computing power to make those decisions. The drone will have to process 30 images per second while contemplating its next move. LeCun says that a processor that can run his algorithms at a trillion operations per second would do the job, but the challenge is to build all that power into a computer light and efficient enough to fly. The best candidate is a processor that LeCun developed with Eugenio Culurciello of Purdue University: a low-power computer the size of a DVD case called NeuFlow, which LeCun is confident he'll be able to speed up to a trillion operations per second by the group's 2015 deadline.

    Once they've built a robot that can learn, see and make decisions fast enough to avoid obstacles, they still have to teach it to fly. Russ Tedrake, an MIT roboticist, is already using motion-capture cameras and a full-scale prototype of the final drone to model the maneuvers it will need to perform. If the team succeeds, the result will be a robot that can descend into a forest and lose today's drones in the trees.

    FILTERING THE WORLD

    As the drone flies, its onboard camera will feed video to software that applies a series of filters to each frame. The first filters pick up patterns among small groups of pixels that indicate simple features, like edges. Next, another series of filters looks for larger patterns, building upward from individual pixels to objects to complex visual scenes. Within hundredths of a second, the software builds a low-resolution map of the scene ahead. Finally, it will compare the objects in view to ones it has "seen" before, classifying them as soon as it has enough information to make an educated guess.
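
    The first stage of that pipeline--a small filter swept across the frame--looks like this in miniature. The 3-by-3 kernel below is a classic vertical-edge detector, chosen for illustration; the real system learns its filters rather than hand-coding them:

        def convolve(image, kernel):
            """Slide a small filter over a grayscale image (lists of lists)."""
            kh, kw = len(kernel), len(kernel[0])
            out = []
            for y in range(len(image) - kh + 1):
                row = []
                for x in range(len(image[0]) - kw + 1):
                    row.append(sum(kernel[j][i] * image[y + j][x + i]
                                   for j in range(kh) for i in range(kw)))
                out.append(row)
            return out

        # Bright-to-dark transitions produce large-magnitude responses.
        EDGE = [[1, 0, -1],
                [1, 0, -1],
                [1, 0, -1]]
        image = [[0, 0, 9, 9]] * 4       # a hard edge down the middle
        print(convolve(image, EDGE))     # big numbers where the edge sits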

    Andrew Rosenblum wrote in the April issue about trucks that fight jet-fuel fires. He lives in Oakland, California.



    Researchers Smash Quantum Teleportation Distance Record, Beaming a Photon Over 89 Miles

    The European Southern Observatory's Optical Ground Station at Tenerife Was Used in the Experiments H. Raab via Wikimedia

    The Olympics are over but international competition is still hot and records continue to fall. Just eight short days after a Chinese physics group posted a paper claiming to have achieved quantum teleportation across a record-setting 97 kilometers (just more than 60 miles), a joint Canadian/European team posted another claiming to have teleported a single photon across 143 kilometers (nearly 89 miles). That second paper hasn't been peer-reviewed yet (the Chinese paper just came out in the peer-reviewed journal Nature on August 9), but should it stand up to scrutiny, we've got yet another new distance record in the quantum teleportation event. Considering that just two years ago ten miles was the high bar, this is a space to watch.

    [SciAm]



    Interactive Infographic: How Do Olympic Gold Medalists Compare?

    See how these Games have stacked up to other years, and how players have used technology and strategy to take gold

    Now that the Olympics are done, we can reflect on the big moments. (Usain Bolt's lightspeed 100-meter win and Michael Phelps's sunken-pirate-ship levels of gold come to mind.) But if we pull the historical camera back even farther, we can look at the big picture, seeing exactly how much of a blip on the timeline this year made. With that in mind, we've created an interactive graph that shows every gold-medal time for several events and annotations for years that were outliers, or that were just especially interesting (including tech like the Speedo LZR suit, or less-known developments like the official roughening of the javelin to handicap the competitors). It's a look at how technology, smarts, and super-human ability brought the Games to where they are now.

    Hover over the bars to see the winning times for each year. To highlight an annotated year, hover over the text below each graph.



    Mutant Butterflies are Turning Up in Japan's Nuclear Disaster Zone

    Mutant Butterflies Something doesn't look right. via Nature

    The first serious indications of the ecosystem impact of the Fukushima nuclear disaster in Japan are in, and they're troubling. Researchers there collected 144 common pale grass blue butterflies from the region a couple of months after the catastrophic nuclear meltdowns leaked radiation into the environment last year. After studying them for a few generations, those researchers are finding signs of genetic mutations that are leading to physical abnormalities.

    Those abnormalities include smaller wings, disfigured antennae, strange indentations in the eyes, and shifts in wing color patterns. Initially, only 12 percent of the butterflies sampled showed such mutations. But samples collected six months later showed abnormalities on the rise--28 percent of this second group exhibited abnormalities, while 52 percent of their offspring expressed mutations, indicating that the genetic damage arising from the radiation exposure grows more pronounced from one generation to the next.

    That's not good news, though it's no reason to panic either, the researchers say. Butterflies are good barometers for this kind of thing, because they exist nearly everywhere and their life cycles are relatively rapid, allowing researchers to observe many generations in a short span. But how radiation exposure will affect other species--like livestock, fish, or humans--has yet to be seen. Species' sensitivities to radiation vary, so there's no reason to assume these kinds of mutations will apply to humans in the area.

    Radiation spilled into the region after the magnitude-9.0 Tohoku earthquake of March 2011 and the tsunami it triggered knocked backup generators offline at the Fukushima Daiichi nuclear power station. The inability to cool the reactors in the immediate aftermath caused meltdowns at three of the station's reactors.

    [MSNBC]




    The Ten Weirdest-Named Shark Species

    Bowmouth Guitarfish The bowmouth guitarfish is actually a ray, though due to its shark-like body is sometimes called a shark ray. It's just about the coolest-looking shark there is (science). It's found all over the tropical sections of the Pacific and Indian Oceans, and survives well in captivity--but is relatively rare, due to accidental catch, fishing (its fins are eaten in some Asian countries), and habitat destruction. Wikimedia Commons

    Happy Shark Week! In honor of the One True American Holiday (all other holidays are less true, due to lower shark content), I spent about an hour reading about sharks on Wikipedia. Important findings from a solid morning's research: sharks often have weird names. For example: the birdbeak dogfish. That's a real animal! Ditto the flaccid catshark and, perhaps weirdest of all, the porbeagle, which doesn't sound like a fish at all. Click through for more.





    Video: Disney's 3-D Facial Scruff Technology Reconstructs Beards Down to the Individual Hair

    Disney's 3-D Facial Hair Reconstruction Technology via Disney Research

    Face capture technology has come a long way, especially as 3-D stereoscopic imaging and the like have made leaps forward in recent years. It's now relatively easy to capture a face in 3-D and reconstruct it digitally for applications such as the amazing CGI you see in movies like The Avengers (Ruffalo-Hulk was pretty visually awesome, no?). But facial hair is another story altogether. Current face capture systems don't capture it well, and the skin that it obscures on the face then becomes an issue as well. Disney Research is changing that.

    Facial hair "plays an important role in individual expression," Disney researchers write in the abstract for its paper titled Coupled 3D Reconstruction of Sparse Facial Hair and Skin (the full paper in PDF is here). In it, they describe an approach to facial reconstruction that captures facial hair and facial skin together, in a coupled fashion that recognizes individual hairs on the face and digitally traces them as such (this doesn't strictly apply to men's beards and mustaches, but also to eyebrows, sideburns, etc.). From that data, the algorithms can also determine where the hair is obscuring facial skin in the imagery so underlying skin surface can be reconstructed accurately.

    You can see how this works in the video below. By using a coupled algorithm that detects and records skin and hair data at the same time (but can also differentiate between them) the researchers demonstrate the ability to reconstruct a person's facial stylings--be it patchy stubble or a full beard--down to the hair.

    [Disney Research]



    With New Programming, Autonomous Airplanes Can Navigate on the Fly With No GPS

    Self-Navigating Drone MIT

    Self-piloted drones may be able to land or fly almost anywhere -- even aircraft carriers -- but they need some complex navigation skills to do it, including the somewhat existential ability to know where they are in the world. But this is difficult without some type of onboard relative positioning system. A new algorithmic project at MIT straps netbook computer parts to a specially designed, laser-equipped airplane that can find itself and navigate tight spaces safely.

    First, the researchers needed a plane that could carry all the computers and electronics it would need to fly, yet still handle tight spaces and low speeds with aerodynamic ease. MIT professor Mark Drela came up with this unusual short-winged model, which buys the aircraft slightly more response time in tight spaces. The Robust Robotics Group, led by associate aeronautics and astronautics professor Nick Roy, used two types of algorithms to help their airplane locate itself in real time.

    It was pre-loaded with a map of its environment, but it's still enormously challenging to write programs that can determine where the plane is on this map. The aircraft uses a laser rangefinder, accelerometers, gyroscopes and other sensors to figure out its orientation, speed and other properties, which means lots of computation is required. To figure out its state of being at any given moment, the plane has to calculate 15 different values, according to MIT News. A combination of two state-estimation algorithms figures this out on the fly, allowing the plane to dodge obstacles it only just figured out it was approaching.
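
    At the core of any such state estimator is a simple idea: weight each noisy sensor by how much you trust it. Here is a one-dimensional Python sketch--a bare-bones Kalman-style update, far simpler than the algorithms the MIT group actually flies:

        def fuse(estimate, est_var, measurement, meas_var):
            """Blend a prediction with a noisy sensor reading.

            The weights fall out of the variances, so a trustworthy
            sensor pulls the estimate harder than a noisy one.
            """
            gain = est_var / (est_var + meas_var)
            new_estimate = estimate + gain * (measurement - estimate)
            new_var = (1 - gain) * est_var
            return new_estimate, new_var

        # Predicted forward speed 9.5 m/s (uncertain); the laser rangefinder
        # implies 10.2 m/s (trusted). The fused value lands near 10.08.
        print(fuse(9.5, 0.5, 10.2, 0.1))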

    Using these algorithms, the plane recently flew through the parking garage under MIT's Seuss-y Stata Center, covering more than three miles at 22 MPH. It successfully dodged support beams, cars and vans, and the low ceilings in the garage.

    It's a complex problem in many ways, not the least of which is a fixed-wing airplane's inherently slower response time, according to Roy. "It's going much faster, and it can't do arbitrary motions," Roy says. "They can't go sideways, they can't hover, they have a stall speed."

    The next step is to endow the plane with the ability to create its own map on the fly, in real time -- a much bigger challenge.

    [MIT]



    Museum Relics Prove Pacific Sharks Died Out Before We Knew They Were There

    Dusky Shark Wikimedia Commons

    Dusky sharks do not live in the Pacific waters near the Republic of Kiribati. Neither do spottail sharks, nor the aptly named bignose sharks. But they once did -- right by the Gilbert Islands, according to anthropological evidence. Ancient shark-tooth weapons can serve as a record of past biodiversity, according to new ecological research.

    The sharks disappeared before biologists ever knew they were there, according to Columbia University conservation biologist Joshua Drew, who presented his findings at the Ecological Society of America's annual meeting. Drew examined weapons from the Gilbert Islands that are owned by the Field Museum in Chicago, according to Nature.

    The collection of swords, tridents and other serrated implements is made of shark teeth twined together, and some of the weapons date to 120 years ago. Because biologists can identify sharks by their teeth, the weapons can prove which sharks were found in the area where the weapons were made, according to Drew.

    He identified 19 distinct species, including three that are no longer found within hundreds or even thousands of miles of the Gilbert Islands. It's unlikely that the teeth in question came from some other outpost, Nature says -- there's no archaeological or anthropological evidence of trade among the Gilbert Islanders and other people who live where those sharks are found today.

    It's not exactly clear how these sharks disappeared, but fishing and shark-finning are likely culprits, Nature reports. The weapons may not be able to shed light on those causes. But they could be a useful data bank for establishing species' presence, if not their numbers, before modern human-related ocean changes began.

    [Nature News]



    Strange, Unexplained Solar Influence Over Earth's Radioactive Material Could Herald Solar Flares

    Gigantic Solar Flare On August 9, 2011 at 3:48 a.m. EDT, the sun emitted an Earth-directed X6.9 flare, as measured by the NOAA GOES satellite. These gigantic bursts of radiation can disrupt the atmosphere and interfere with GPS and communications signals. New research shows that strange solar interactions with radioactive particles on Earth could be used as an early warning system for flares like this one. NASA

    A mystifying trick of the sun, inexplicably interfering with particles on Earth, could be used as an early-warning system for solar flares, a new study says. With enough warning, satellites, telecommunications infrastructure and even orbiting astronauts can take cover from our star's worst radioactive eruptions.

    This all goes back to 2006, when physicists at Purdue, Stanford and other places noticed something that at first defied physical explanation: Radioactive elements were changing their decay rates. This flew in the face of long-accepted physics theory, which held that these rates are constant. Radioactive decay apparently grew more pronounced in winter than in summer, and when scientists went looking for an explanation, they noticed this appeared to correlate with solar flares.

    Last year, we learned from Purdue physicist Ephraim Fischbach that this kept happening. He noticed a change in the radioactive decay rate of a manganese isotope, and also tied it to a solar flare that happened a night before. So that meant something came out of the sun, went through the Earth, hit a piece of manganese-54 and changed the rate at which it decays into chromium-54, spewing out ionizing particles. This also happened to an isotope called chlorine-36, in different experiments at different labs. The unusual decay change has happened during 10 solar flares since 2006, and the song remains the same.

    "We have repeatedly seen a precursor signal preceding a solar flare," Fischbach says in a new news release. "We think this has predictive value."

    That's good news -- an imminent-solar-eruption detector could help astronauts on the International Space Station or en route to Mars take cover, or it could notify people on Earth that they should shut down power plants and communications infrastructure to guard against another Carrington event. That gigantic solar flare and coronal mass ejection, back in 1859, caused telegraph wires to glow and the aurora borealis to appear as far south as Cuba.

    The potential solar sensor would consist of a chunk of manganese-54 and a gamma-radiation detector, which would record the manganese's rate of decay into chromium-54. If it changes, you'd know to take cover. Purdue filed a patent application for the concept.
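
    Statistically, such a sensor is just a change detector sitting on top of ordinary counting noise. A Python sketch of the trigger logic--the baseline rate, window length and threshold below are all invented values for illustration:

        import math

        BASELINE_CPS = 1000.0   # counts per second from the manganese-54 sample

        def alarm(window_counts, seconds, threshold_sigma=5.0):
            """Flag a statistically significant shift in the decay rate.

            Counting noise scales as sqrt(N), so the detector waits for a
            deviation well beyond what chance alone would produce.
            """
            expected = BASELINE_CPS * seconds
            sigma = math.sqrt(expected)
            return abs(window_counts - expected) > threshold_sigma * sigma

        # An ordinary hour, then an hour with the rate depressed half a percent.
        print(alarm(3_601_500, 3600))          # False: within normal jitter
        print(alarm(3_600_000 * 0.995, 3600))  # True: take cover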

    The bad news is that still, no one knows why this happens. It may be an interaction among ionizing particles and neutrinos, but those things are puny, chargeless and mostly unwilling to interact with any normal matter. It's still a mystery, said Fischbach's colleague at Purdue, nuclear engineer Jere Jenkins. "We are saying something that doesn't interact with anything is changing something that can't be changed. Either neutrinos are affecting decay rate or perhaps an unknown particle is," Jenkins says.

    Research describing the detector appears in the journal Astroparticle Physics.

    [via Futurity]


