Police body-cam maker Axon says no to facial recognition, for now

Facial recognition is a controversial enough topic without bringing in everyday policing and the body cameras many (but not enough) officers wear these days. But Axon, which makes many of those cameras, solicited advice on the topic from an independent ethics board, and in accordance with its findings has opted not to use facial recognition for the time being.

The company, formerly known as Taser, established its “AI and Policing Technology Ethics Board” last year, and the group of 11 experts from a variety of fields has just issued its first report, largely focused (on its own initiative) on the threat of facial recognition.

The advice they give is unequivocal: don’t use it — now or perhaps ever.

More specifically, their findings are as follows:

  • Facial recognition simply isn’t good enough right now for it to be used ethically.
  • Don’t talk about “accuracy”; talk about specific false negatives and positives, since those are more revealing and relevant (a toy calculation after this list shows why).
  • Any facial recognition model that is used shouldn’t be overly customizable, or it will open up the possibility of abuse.
  • Any application of facial recognition should only be initiated with the consent and input of those it will affect.
  • Until there is strong evidence that these programs provide real benefits, there should be no discussion of use.
  • Facial recognition technologies do not exist, nor will they be used, in a political or ethical vacuum, so consider the real world when developing or deploying them.
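To see why the board pushes back on a single "accuracy" number, here is a toy calculation (every figure below is invented for illustration, not drawn from any real deployment). In a rare-match scenario like a watchlist, a system can be roughly 99% accurate while most of the people it flags are not actually matches:

```python
# Hypothetical numbers, purely to illustrate the board's point about
# reporting false positives and false negatives instead of "accuracy."
total = 100_000                  # faces scanned
actually_on_list = 100           # genuinely on the watchlist
actually_not = total - actually_on_list

false_negatives = int(actually_on_list * 0.10)  # misses 10% of real matches
false_positives = int(actually_not * 0.01)      # wrongly flags 1% of everyone else
true_positives = actually_on_list - false_negatives
true_negatives = actually_not - false_positives

accuracy = (true_positives + true_negatives) / total
print(f"Accuracy: {accuracy:.2%}")                           # ~98.99%, sounds great
print(f"Innocent people flagged: {false_positives}")         # 999
wrong_share = false_positives / (false_positives + true_positives)
print(f"Share of flags that are wrong: {wrong_share:.0%}")   # ~92%
```

The headline accuracy hides the fact that roughly nine out of ten alerts point at the wrong person, which is exactly the kind of detail the board wants surfaced.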

The full report may be read here; there’s quite a bit of housekeeping and internal business, but the relevant part starts on page 24. Each of the above bullet points gets a couple pages of explanation and examples.

Axon, for its part, writes that it is quite in agreement: “The first board report provides us with thoughtful and actionable recommendations regarding face recognition technology that we, as a company, agree with… Consistent with the board’s recommendation, Axon will not be commercializing face matching products on our body cameras at this time.”

Not that they won’t be looking into it. The idea, I suppose, is that the technology will never be good enough to provide the desired benefits if no one is advancing the science that underpins it. The report doesn’t object except to advise the company that it adhere to the evolving best practices of the AI research community to make sure its work is free from biases and systematic flaws.

One interesting point that isn’t always brought up is the difference between face recognition and face matching. Although the former is the colloquial catch-all term for technology we think of as potentially invasive, biased, and so on, in the report’s terminology it is distinct from the latter.

Face recognition, or detection, is just finding the features that make up a face in a picture — this can be used by a smartphone to focus its camera or apply an effect, for instance. Face matching takes the features of the detected face and compares them to a database in order to match the face to one on file — that could be to unlock your phone using Face ID, but it could also be the FBI comparing everyone entering an airport to the most wanted list.
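To make the distinction concrete, here is a minimal sketch using the open-source face_recognition Python package (chosen only because it is widely available; it has nothing to do with Axon's technology, and the image file names are hypothetical):

```python
import face_recognition

frame = face_recognition.load_image_file("bodycam_frame.jpg")  # hypothetical frame

# Face detection/recognition in the narrow sense: just find where the faces are.
locations = face_recognition.face_locations(frame)
print(f"Found {len(locations)} face(s)")  # enough for autofocus, effects, or blurring

# Face matching goes a step further: encode each face and compare it to a known photo.
known = face_recognition.face_encodings(
    face_recognition.load_image_file("watchlist_photo.jpg"))[0]
for encoding in face_recognition.face_encodings(frame, known_face_locations=locations):
    is_match = face_recognition.compare_faces([known], encoding, tolerance=0.6)[0]
    print("Matches the known photo" if is_match else "No match")
```

The first half is the relatively benign capability; the second is where the policy questions live.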

Axon uses face recognition and tracking to process the many, many hours of video that police departments full of body cams produce. When that video is needed as evidence, faces other than the people directly involved may need to be blurred out, and you can’t do that unless you know where the faces are. (Update: This paragraph originally stated that Axon was using a “lesser form of face matching,” which matches faces within videos but not with any central database, that it calls face re-identification. In fact this technology is not currently deployed commercially and is only in the research phase.)
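As a rough illustration of that redaction workflow (not Axon's actual pipeline), detection plus blurring can be sketched in a few lines of OpenCV; the file names are placeholders:

```python
import cv2

img = cv2.imread("evidence_frame.jpg")                      # placeholder file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# OpenCV ships a basic Haar-cascade face detector.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Blur every detected face; a real evidence tool would let an operator
# exempt the faces of people directly involved in the case.
for (x, y, w, h) in faces:
    img[y:y+h, x:x+w] = cv2.GaussianBlur(img[y:y+h, x:x+w], (51, 51), 0)

cv2.imwrite("evidence_frame_redacted.jpg", img)
```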

That particular form of the technology seems benign in its current form, and no doubt there are plenty of other applications that it would be hard to disagree with. But as facial recognition techniques grow more mainstream it will be good to have advisory boards like this one keeping the companies that use them honest.

NASA’s Dragonfly will fly across the surface of Titan, Saturn’s ocean moon

NASA has just announced its next big interplanetary mission: Dragonfly, which will deliver a Mars Rover-sized flying vehicle to the surface of Titan, a moon of Saturn with tantalizing life-supporting qualities. The craft will fly from place to place, sampling the delicious organic surface materials and sending high-resolution pictures back to Earth.

Dragonfly will launch in 2026, taking eight years to reach Titan and land (if all goes well) in 2034. So there will be plenty more updates after this one!

The craft will parachute through Titan’s hazy atmosphere and land in its dune-filled equatorial region. It’s equipped with drills and probes to investigate the surface, and of course cameras to capture interesting features and the surrounding alien landscape, flying from place to place using a set of rotors like a drone’s.

We’ve observed Titan from above via the Cassini mission, and we’ve even touched down on its surface briefly with the Huygens probe — which for all we know is still sitting there. But this will be a much more in-depth look at this fascinating moon.

Titan is a weird place. With rivers, oceans, and abundant organic materials on the surface, it’s very like Earth in some ways — but you wouldn’t want to live there. The rivers are liquid methane, for one thing, and if you’re familiar with methane, you’ll know that means it’s really cold there.

Nevertheless, Titan is still an interesting analogue to early Earth.

“We know that Titan has rich organic material, very complex organic material on the surface; there’s energy in the form of sunlight; and we know there’s been water on the surface in the past. These ingredients that we know are necessary for the development of life as we know it are sitting on the surface of Titan,” said principal investigator Elizabeth Turtle. “They’ve been doing chemistry experiments, basically, for hundreds of millions of years, and Dragonfly is designed to go pick up the results of those experiments.”

Don’t expect a flourishing race of methane-dwelling microbes, though. It’s more like going back in time to pre-life Earth to see what conditions may have resulted in the earliest complex self-replicating molecules: the origin of the origin of life, if you will.

Principal investigator Elizabeth Turtle shows off a 1/4 scale model of the Dragonfly craft.

To do so, Dragonfly, true to its name, will flit around the surface to collect data from many different locations. It might seem that something the size of a couch would have trouble lifting off, but as Turtle explained, it’s actually a lot easier to fly around Titan than to roll across it. With a far thicker atmosphere (mostly nitrogen, like ours) and a fraction of Earth’s gravity, flying there will be more like traveling through water than air.

That explains why its rotors are so small — for something that big on Earth, you’d need huge powerful rotors working full time. But even one of these little rotors can shift the craft if necessary (though they’ll want all eight for lift and redundancy).
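For a rough sense of the physics, here is a back-of-the-envelope comparison using ideal-rotor "momentum theory" and published ballpark values for Titan (surface gravity about 1.35 m/s^2, air density about 5.4 kg/m^3). The lander mass and rotor area below are placeholders; the Earth-to-Titan ratio does not depend on them:

```python
from math import sqrt

g_earth, rho_earth = 9.81, 1.225   # m/s^2, kg/m^3
g_titan, rho_titan = 1.35, 5.4     # rough published values for Titan's surface

def hover_power(mass_kg, rotor_area_m2, g, rho):
    """Ideal induced power to hover: P = sqrt(T^3 / (2 * rho * A))."""
    thrust = mass_kg * g
    return sqrt(thrust**3 / (2 * rho * rotor_area_m2))

mass, area = 450.0, 2.0   # placeholder lander mass (kg) and total rotor disk area (m^2)
ratio = hover_power(mass, area, g_titan, rho_titan) / hover_power(mass, area, g_earth, rho_earth)
print(f"Hovering on Titan takes ~{ratio:.1%} of the power it would on Earth")  # a few percent
```

By this crude estimate, hovering the same vehicle on Titan takes only a few percent of the power it would need on Earth, which is why a couch-sized craft with small rotors is plausible there.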

We’ll learn more soon, no doubt. This is just the opening salvo from NASA on what will surely be years of further highlights, explanations, and updates on Dragonfly’s creation and launch.

“It’s remarkable to think of this rotorcraft flying miles and miles across the organic sand dunes of Saturn’s largest moon, exploring the processes that shape this extraordinary environment,” said NASA associate administrator for science Thomas Zurbuchen. “Titan is unlike any other place in the solar system, and Dragonfly is like no other mission.”

Tiny Robobee X-Wing powers its flight with light

We’ve seen Harvard’s Robobee flying robot evolve for years: After first learning to fly, it learned to swim in 2015, then to jump out of the water again in 2017 — and now it has another trick up its non-existent sleeve. The Robobee X-Wing can fly using only the power it collects from light hitting its solar cells, making it possible to stay in the air indefinitely.

Achieving flight at this scale is extremely hard. You might think that being small, it would be easy to take off and maintain flight, like an insect does. But self-powered flight actually gets much harder the smaller you go, which puts insects among the most bafflingly marvelous feats of engineering we have encountered in nature.

Oh, it’s easy enough to fly when you have a wire feeding you electricity to power a pair of tiny wings — and that’s how the Robobee and others flew before. It’s only very recently that researchers have accomplished meaningful flight using on-board power or, in one case, a laser zapping an attached solar panel.

The new Robobee X-Wing (named for its 4-wing architecture) achieves a new milestone with the ability to fly with no battery and no laser — only plain full-spectrum light coming from above. Brighter than sunlight, to be fair — but close to real-world conditions.

The team at Harvard’s Microrobotics Laboratory accomplished this by making the power conversion and wing mechanical systems incredibly lightweight — the whole thing weighs about a quarter of a gram, or about half a paper clip. Its power consumption is likewise lilliputian:

Consuming only 110–120 milliwatts of power, the system matches the thrust efficiency of similarly sized insects such as bees. This insect-scale aerial vehicle is the lightest thus far to achieve sustained untethered flight (as opposed to impulsive jumping or liftoff).

That last bit is some shade thrown at its competitors, which by nature can’t quite achieve “sustained untethered flight,” though what constitutes that isn’t exactly clear. After all, this Dutch flapping flyer can go a kilometer on battery power. If that isn’t sustained, I don’t know what is.
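To put the quoted figures in perspective, a little arithmetic (using the article's "about a quarter of a gram" and the 110 to 120 milliwatt range) gives a sense of how little power is doing the work:

```python
mass_kg = 0.25e-3          # ~0.25 g vehicle mass, per the figure quoted above
power_w = 0.115            # midpoint of the 110-120 mW range
weight_n = mass_kg * 9.81  # force the wings must produce just to hover

print(f"Weight to support: {weight_n * 1e3:.2f} mN")               # ~2.45 mN
print(f"Power-to-weight:   {power_w / mass_kg:.0f} W/kg")          # ~460 W/kg
print(f"Thrust per watt:   {weight_n / power_w * 1e3:.0f} mN/W")   # ~21 mN/W
```

Those are rough numbers, but they show why shaving every milligram off the power electronics mattered.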

In the video of the Robobee you can see that when it is activated, it shoots up like a bottle rocket. One thing they don’t really have space for on the robot’s little body (yet) is sophisticated flight control electronics and power storage that could let it use only the energy it needs, flapping in place.

That’s probably the next step for the team, and it’s a non-trivial one: adding weight and new systems completely changes the device’s flight profile. But give them a few months or a year and this thing will be hovering like a real dragonfly.

The Robobee X-Wing is exhaustively described in a paper published in the journal Nature.

Oppo shows first under-screen camera in bid to eliminate the hated notch

Ever since the notch was first added to smartphones, everyone in the world except the deeply deluded and my editor has wished it gone. Oppo has done it — or at least shown that it can be done — with a demonstration unit at Mobile World Congress in Shanghai. iPhone users can console themselves that Oppo kind of sounds like Apple.

Oppo and Xiaomi both teased their upcoming under-screen cameras in recent weeks, but it’s one thing to put out a video and quite another to show a working model to the public. And Oppo’s device was unmistakably present in Shanghai.

Unfortunately, if you were hoping that the first device would knock it out of the park… not quite. Eyes-on photos and impressions from Engadget China show that the transparent LCD used to cover the camera assembly is, or can be, noticeably different from its surroundings. Of course, the team there was trying to capture that effect; viewed straight on, when you’re not looking for it, it may not be particularly pronounced. But it’s there.

The camera itself, since it loses a lot of incoming light to the LCD layer, has a larger sensor with bigger pixels on it to better capture that light. This suggests a lower resolution for the unit than other front-facing cameras, and obviously shooting through an extra layer will reduce sharpness and increase artifacting. Oppo says it is working on reducing these in software, but there’s only so much you can do. The sample photos don’t look so hot.

It’s not going to set the world on fire, but Oppo’s less visible camera is a step towards a notchless future, and that I can support. No word on when it’ll actually be available for purchase, or in what models — perhaps Xiaomi will take the opportunity to announce its under-screen camera with a few more of the relevant details.

This robot crawls along wind turbine blades looking for invisible flaws

Wind turbines are a great source of clean power, but their apparent simplicity — just a big thing that spins — belies complex systems that wear down like any other and can fail with disastrous consequences. Sandia National Labs researchers have created a robot that can inspect the enormous blades of turbines autonomously, helping keep our green power infrastructure in good repair.

The enormous towers that collect energy from wind currents are often only in our view for a few minutes as we drive past. But they must stand for years through inclement weather, temperature extremes, and naturally — being the tallest things around — lightning strikes. Combine that with normal wear and tear and it’s clear these things need to be inspected regularly.

But such inspections can be both difficult and superficial. The blades themselves are among the largest single objects manufactured on the planet, and they’re often installed in distant or inaccessible areas, like the many you see offshore.

“A blade is subject to lightning, hail, rain, humidity and other forces while running through a billion load cycles during its lifetime, but you can’t just land it in a hangar for maintenance,” explained Sandia’s Joshua Paquette in a news release. In other words, not only do crews have to go to the turbines to inspect them, but they often have to do those inspections in place — on structures hundreds of feet tall and potentially in dangerous locations.

Using a crane is one option, but the blade can also be oriented downward so an inspector can rappel along its length. Even then the inspection may be no more than eyeballing the surface.

“In these visual inspections, you only see surface damage. Often though, by the time you can see a crack on the outside of a blade, the damage is already quite severe,” said Paquette.

Obviously better and deeper inspections are needed, and that’s what the team decided to work on, with partners International Climbing Machines and Dophitech. The result is this crawling robot, which can move along a blade slowly but surely, documenting it both visually and using ultrasonic imaging.

A visual inspection will see cracks or scuffs on the surface, but the ultrasonics penetrate deep into the blades, making them capable of detecting damage to interior layers well before it’s visible outside. And it can do it largely autonomously, moving a bit like a lawnmower: side to side, bottom to top.
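A lawnmower-style sweep is simple enough to sketch; this is just an illustration of the coverage idea, not Sandia's actual control code, and the dimensions are invented:

```python
def serpentine_path(blade_length_m, blade_width_m, swath_m):
    """Yield (height_up_blade, position_across_blade) waypoints, bottom to top."""
    y, direction = 0.0, 1
    while y <= blade_length_m:
        # Sweep across the blade in alternating directions, like mowing a lawn.
        xs = (0.0, blade_width_m) if direction > 0 else (blade_width_m, 0.0)
        for x in xs:
            yield (y, x)
        y += swath_m        # step upward by one sensor swath
        direction *= -1     # reverse the sweep for the next pass

# Invented dimensions: a 50 m blade, 3 m across, covered in half-meter swaths.
for waypoint in serpentine_path(blade_length_m=50, blade_width_m=3, swath_m=0.5):
    pass  # a real controller would drive the crawler to each waypoint and scan
```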

Of course at this point it does it quite slowly and requires human oversight, but that’s because it’s fresh out of the lab. In the near future teams could carry around a few of these things, attach one to each blade, and come back a few hours or days later to find problem areas marked for closer inspection or scanning. Perhaps a crawler robot could even live onboard the turbine and scurry out to check each blade on a regular basis.

Another approach the researchers took was drones — a natural enough solution, since the versatile fliers have been pressed into service for inspection of many other structures that are dangerous for humans to get around: bridges, monuments, and so on.

These drones would be equipped with high-resolution cameras and infrared sensors that detect the heat signatures in the blade. The idea is that as warmth from sunlight diffuses through the material of the blade, it will do so irregularly in spots where damage below the surface has changed its thermal properties.

As automation of these systems improves, the opportunities open up: A quick pass by a drone could let crews know whether any particular tower needs closer inspection, then trigger the live-aboard crawler to take a closer look. Meanwhile the humans are on their way, arriving with a better picture of what needs to be done and no need to risk life and limb just to take a look.


Crowdfunded spacecraft LightSail 2 prepares to go sailing on sunlight

Among the many spacecraft and satellites ascending to space on Monday’s Falcon Heavy launch, the Planetary Society’s LightSail 2 may be the most interesting. If all goes well, a week from launch it will be moving through space — slowly, but surely — on nothing more than the force exerted on it by sunlight.

LightSail 2 doesn’t have solar-powered engines, or use solar energy or heat for some secondary purpose; it will literally be propelled by the physical force of photons hitting its immense shiny sail. Not solar wind, mind you — that’s a different thing altogether.

It’s an idea, explained Planetary Society CEO and acknowledged Science Guy Bill Nye in a press call ahead of the launch, that goes back centuries.

“It really goes back to the 1600s,” he said; Kepler deduced that a force from the sun must cause comet tails and other effects, and “he speculated that brave people would one day sail the void.”

So they might, as more recent astronomers and engineers have pondered the possibility more seriously.

“I was introduced to this in the 1970s, in the disco era. I was in Carl Sagan’s astronomy class… wow, 42 years ago, and he talked about solar sailing,” Nye recalled. “I joined the Planetary Society when it was formed in 1980, and we’ve been talking about solar sails around here ever since then. It’s really a romantic notion that has tremendous practical applications; there are just a few missions that solar sails are absolutely ideal for.”

Those would primarily be long-term, medium-orbit missions where a craft needs to stay in an Earth-like orbit, but still get a little distance away from the home planet — or, in the future, long-distance missions where slow and steady acceleration from the sun or a laser would be more practical than another propulsion method.

Mission profile

The eagle-eyed among you may have spotted the “2” in the name of the mission. LightSail 2 is indeed the second of its type; the first launched in 2015, but was not planned to be anything more than a test deployment that would burn up after a week or so.

That mission had some hiccups, with the sail not deploying to its full extent and a computer glitch compromising communications with the craft. It was not meant to fly via solar sailing, and did not.

“We sent the CubeSat up, we checked out the radio, the communications, the overall electronics, and we deployed the sail and we got a picture of that deployed sail in space,” said COO Jennifer Vaughn. “That was purely a deployment test; no solar sailing took place.”

The spacecraft itself, minus the sail, of course.

But it paved the way for its successor, which will attempt this fantastical form of transportation. Other craft have done so, most notably JAXA’s IKAROS mission to Venus, which was quite a bit larger — though as LightSail 2’s creators pointed out, not nearly as efficient as their craft — and had a very different mission.

The brand new spacecraft, loaded into a 3U CubeSat enclosure — that’s about the size of a loaf of bread — is piggybacking on an Air Force payload going up to an altitude of about 720 kilometers. There it will detach and float freely for a week to get away from the rest of the payloads being released.

Once it’s safely on its own, it will fire out from its carrier craft and begin to unfurl the sail. From that loaf-sized package will emerge an expanse of reflective Mylar with an area of 32 square meters — about the size of a boxing ring.

Inside the spacecraft’s body is also what’s called a reaction wheel, which can be spun up or slowed down in order to impart the opposite force on the craft, causing it to change its attitude in space. By this method LightSail 2 will continually orient itself so that the photons striking it propel it in the desired direction, nudging it into the desired orbit.
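The principle is plain conservation of angular momentum: spin the wheel one way and the craft turns the other. A toy calculation (all numbers invented, not LightSail 2's actual specs) shows the idea:

```python
# Angular momentum is conserved: I_wheel * dw_wheel + I_craft * dw_craft = 0
wheel_inertia = 1e-4          # kg*m^2, a small flywheel (invented value)
craft_inertia = 0.05          # kg*m^2, CubeSat body plus sail (invented value)
wheel_spin_up = 200.0         # rad/s commanded on the wheel

craft_rate_change = -wheel_inertia * wheel_spin_up / craft_inertia
print(f"Craft rotation rate change: {craft_rate_change:.2f} rad/s")  # about -0.4 rad/s
```

Repeatedly trading momentum back and forth with the wheel lets the craft keep the sail angled toward or away from the sun at the right points in each orbit.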

1 HP (housefly power) engine

The thrust produced, the team explained, is very small — as you might expect. Photons have no mass, but they do (somehow) have momentum. Not a lot, to be sure, but it’s greater than zero, and that’s what counts.

“In terms of the amount of force that solar pressure is going to exert on us, it’s on the micronewton level,” said LightSail project manager Dave Spencer. “It’s very tiny compared to chemical propulsion, very small even compared to electric propulsion. But the key for solar sailing is that it’s always there.”

“I have many numbers that I love,” cut in Nye, and detailed one of them: “It’s nine micronewtons per square meter. So if you have 32 square meters you get about a hundred micronewtons. It doesn’t sound like much, but as Dave points out, it’s continuous. Once a rocket engine stops, when it runs out of fuel, it’s done. But a solar sail gets a continuous push day and night. Wait…” (He then argued with himself about whether it would experience night — it will, as you see in the image below.)

Bruce Betts, chief scientist for LightSail, chimed in as well, to make the numbers a bit more relatable: “The total force on the sail is approximately equal to the weight of a house fly on your hand on Earth.”

Yet keep that fly’s weight of force pushing continuously, hour after hour, and pretty soon you’ve built up a considerable amount of speed. This mission is meant to find out whether we can capture that force.
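For a rough sense of what that adds up to, take the roughly 100 micronewtons mentioned above and a small-CubeSat mass of about 5 kg (an assumption on my part, not a figure from the mission team):

```python
force_n = 100e-6     # ~100 uN of photon pressure on the full sail, per the figures above
mass_kg = 5.0        # assumed spacecraft mass for a sail-carrying CubeSat

accel = force_n / mass_kg            # m/s^2
dv_per_day = accel * 86_400          # seconds in a day

print(f"Acceleration: {accel:.1e} m/s^2")               # 2.0e-05 m/s^2
print(f"Speed gained per day: {dv_per_day:.1f} m/s")    # ~1.7 m/s, with no propellant at all
```

A couple of meters per second per day sounds modest, but it never stops accumulating as long as the sail sees sunlight.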

“We’re very excited about this launch,” said Nye, “because we’re going to get to a high enough altitude to get away from the atmosphere, far enough that we’re really gonna be able to build orbital energy and take some, I hope, inspiring pictures.”

Second craft, same (mostly) as the last

The LightSail going up this week has some improvements over the last one, though overall it’s largely the same — and a relatively simple, inexpensive craft at that, the team noted. Crowdfunding and donations over the last decade have provided quite a bit of cash to pursue this project, but it still is only a small fraction of what NASA might have spent on a similar mission, Spencer pointed out.

“This mission is going to be much more robust than the previous LightSail 1, but as we said previously, it’s done by a small team,” he said. “We’ve had a very small budget relative to our NASA counterparts, probably 1/20th of the budget that a similar NASA mission would have. It’s a low-cost spacecraft.”

Annotated image of LightSail 2, courtesy of Planetary Society.

But the improvements are specifically meant to address the main problems encountered by LightSail 2’s predecessor.

Firstly, the computer inside has been upgraded to be more robust (though not radiation-hardened) and given the ability to sense faults and reboot if necessary — they won’t have to wait, as they did for LightSail 1, for a random cosmic ray to strike the computer and cause a “natural reboot.” (Yes, really.)

The deployment of the sail itself has also been improved. The previous one only extended to about 90% of its full width and couldn’t be adjusted after the fact. Subsequent tests have been done, Betts told me, to determine exactly how many revolutions the motor must make to extend the sail to 100%. Not only that, but they have put markings on the extending booms, or rods, that will help double-check how deployment has gone.

“We also have the capability on orbit, if it looks like it’s not fully extended, we can extend it a little bit more,” he said.

Once it’s all out there, it’s uncharted territory. No one has attempted a mission quite like this before — even IKAROS had a totally different flight profile. The team is hoping their sensors and software are up to the task — and it should be clear whether that’s the case within a few hours of unfurling the sail.

It’s still mainly an experiment, of course, and what the team learns from it will go into any future LightSail missions they attempt, and will also be shared with the spaceflight community and others attempting to sail on sunlight.

“We all know each other and we all share information,” said Nye. “And it really is — I’ve said it as much as I can — it’s really exciting to be flying this thing at last. It’s almost 2020 and we’ve been talking about it for, well, for 40 years. It’s very, very cool.”

LightSail 2 will launch aboard a SpaceX Falcon Heavy no sooner than June 24th. Keep an eye on the site for the latest news and a link to the live stream when it’s almost time for takeoff.


Hasselblad’s new medium format camera is a tiny, beautiful nod to history

While mirrorless cameras accelerate into the future, medium format models are harkening back to the past — and Hasselblad is chief among them. Its new digital back fits lenses going back to the ’50s, and the tiny 907X camera body is about as lovely a throwback as one can imagine.

The new set of systems, announced today, is somewhat different from what most people are used to. Most interchangeable-lens systems, like Canon’s and Nikon’s DSLRs and Olympus’ and Fujifilm’s mirrorless cameras, generally have two parts: a lens and a body, the latter of which houses the image sensor.

Hasselblad does make cameras like that, and in fact introduced a dandy-looking new one today, the X1D II 50C (just try to keep track of these names). But the more interesting item by far to me is the CFV II digital back and 907X camera body.

Unlike a traditional DSLR body, a digital back is essentially just a giant sensor; it fits where the medium format film would have gone and collects light in its place. But it also needs a camera unit to do the heavy lifting of parsing all those pixels — about 50 million of them in this case.

What’s nice about this is that you can attach a modern back and camera unit to a lens decades old — you could also attach a modern one, but why? Part of the fun of medium format is using equipment from the distant past, and shooting in some ways the same way someone might have shot a century ago.

The system Hasselblad introduced today is one of the most compact you’ll find, packing all the processing power needed into an enclosure that’s hardly bigger than the lens itself. On the back of it is a high-resolution touchscreen that flips out to 45 and 90 degree angles, letting you shoot top-down or from an angle, like the old days.

It may seem a mere nostalgia bid, but it’s an interesting way to shoot and is more focused on careful composition than spontaneous captures. And brother, is it handsome, as you can see above. (The top picture shows the camera rotated so you can see the screen — normally it would face away from the lens.)

Pricing and availability are to be announced, but this won’t be cheap — think in the $4,000-$6,000 range for the two pieces.

I probably will never own one, but I’m satisfied to know that there is a shooting experience out there that emulates the old medium format style so closely, and not just superficially. It’s a lovely piece of hardware and if Hasselblad’s record is any indication, it’ll take lovely photos.


Tripping grad students over and over for science (and better prosthetic limbs)

Prosthetic limbs are getting better, but not as quickly as you’d think. They’re not as smart as our real limbs, which (directed by the brain) do things like automatically stretching out to catch us when we fall. This particular “stumble reflex” was the subject of an interesting study at Vanderbilt that required its subjects to fall down… a lot.

The problem the team is aiming to help alleviate is simply that users of prosthetic limbs fall, as you might guess, more than most. And when they do fall, it can be very difficult to recover, because an artificial leg — especially for above-the-knee amputations — doesn’t react the same way a natural leg would.

The idea, explained lead researcher and mechanical engineering Professor Michael Goldfarb, is to determine what exactly goes into a stumble response and how to recreate that artificially.

“An individual who stumbles will perform different actions depending on various factors, not all of which are well known. The response changes, because the strategy that is most likely to prevent a fall is highly dependent on the ‘initial conditions’ at the time of stumble,” he told TechCrunch in an email. “We are hoping to construct a model of which factors determine the nature of the stumble response, so when a stumble occurs, we can use the various sensors on a robotic prosthetic leg to artificially reconstruct the reflex in order to provide a response that is effective and consistent with the biological reflex loop.”

The experimental setup looked like this. Subjects were put on a treadmill and told to walk forward normally; a special pair of goggles prevented them from looking down, arrows on a display kept them going straight, and a simple mental task (count backwards by sevens) kept their brain occupied.

Meanwhile an “obstacle delivery apparatus” bided its time, waiting for the best opportunity to slip a literal stumbling block onto the treadmill for the person to trip over.

When this happened, the person inevitably stumbled, though a harness prevented them from actually falling and hurting themselves. But as they stumbled, their movements were captured minutely by a motion capture rig.

After 196 stumbling blocks and 190 stumbles, the researchers had collected a great deal of data on how exactly people move to recover from a stumble. Where do their knees go relative to their ankles? How do they angle their feet? How much force is taken up by the other foot?

Exactly how this data would be integrated with a prosthesis is highly dependent on the nature of the artificial limb and the conditions of the person using it. But having this data, and perhaps feeding it to a machine learning model, will help expose patterns that can be used to inform emergency prosthetic movements.
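As a hedged sketch of what "feeding it to a machine learning model" could look like, here is a toy classifier that predicts which recovery strategy was used from a few features of the stumble; the feature names and data file are hypothetical, not the study's actual variables:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical export of the motion-capture trials.
df = pd.read_csv("stumble_trials.csv")
features = ["gait_phase_at_trip", "walking_speed", "knee_angle", "ankle_angle"]
X, y = df[features], df["recovery_strategy"]   # e.g. an elevating vs. lowering step

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```

A prosthetic controller could evaluate a model like this in real time, with its onboard sensors standing in for the feature vector.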

It could also be used for robotics: “The model could be used directly to program reflexes in a biped,” said Goldfarb. Those human-like motions we see robots undertaking could be even more human when directly based on the original. There’s no rush there — they might be a little too human already.

The research describing the system and the data set, which they’re releasing for free to anyone who’d like to use it, appeared in the Journal of NeuroEngineering and Rehabilitation.


NASA’s X-59 supersonic jet will have a 4K TV instead of a forward window

NASA’s X-59 QueSST experimental quiet supersonic aircraft will have a cockpit like no other — featuring a big 4K screen where you’d normally have a front window. Why? Because this is one weird-looking plane.

The X-59, which is being developed by Lockheed Martin on a $247 million budget, is meant to go significantly faster than sound without producing a sonic boom, or indeed any noise “louder than a car door closing,” at least to observers on the ground.

Naturally, in order to do this the craft has to be as aerodynamic as possible, which precludes the cockpit bump often found in fighter jets. In fact, the design can’t even have the pilot up front with a big window, because any forward window would likely be far too narrow to be useful. Check out these lines:

The cockpit is more like a section taken out of the plane just over the leading edge of the rather small and exotically shaped wings. So while the view out the sides will be lovely, the view forward would be nothing but nose.

To fix that, the plane will be equipped with several displays: the lower ones are just what you might expect on a modern aircraft, but the top one is a 4K monitor that’s part of what’s called the eXternal Visibility System, or XVS. It shows imagery stitched together from two cameras on the craft’s exterior, combined with high-definition terrain data loaded up ahead of time.

It’s not quite the real thing, but pilots spend a lot of time in simulators (as you can see here), so they’ll be used to it. And the real world is right outside the other windows if they need a reality check.

Lockheed and NASA’s plane is currently in the construction phase, though no doubt some parts are still being designed. The program has committed to a 2021 flight date, an ambitious goal considering this is the first such experimental plane, or X-plane, the agency has developed in some 30 years. If successful, it could be the precursor to other quiet supersonic craft and could bring back supersonic overland flight in the future.

That’s if Boom doesn’t beat them to it.


The Geesaa automates (but overcomplicates) pourover coffee

Making pourover coffee is a cherished ritual of mine on most mornings. But there are times I wish I could have a single cup of pourover without fussing about the kitchen — and the Geesaa, a new gadget seeking funds on Kickstarter, lets me do that. But it’s definitely still a ways from being a must-have.

I’m interested in alternative coffee preparation methods, low and high tech, so I was happy to agree to try out the Geesaa when they contacted me just ahead of their Kickstarter campaign going live (they’ve already hit their goal at this point). I got to test one of their prototypes and have used it on and off for the last couple of weeks.

The Geesaa is part of a new wave of coffee makers that make advances on traditional drip techniques, attempting to get closer to a manual pourover. That usually means carefully controlling the water temperature and dispensing it not just in a stream powerful enough to displace and churn the ground coffee, but in a pattern that’s like what you’d do if you were pouring it by hand. (The Automatica, another one with a similar idea, sadly didn’t make it.)

Various manufacturers do this in various ways, so Geesaa isn’t exactly alone, though its mechanism appears to be unique. Instead of using a little showerhead that drips regularly over the grounds, or sending a moving stream in a spiral, the Geesaa spins the carafe and pours water from a moving head above it.

This accomplishes the kind of spiral pour that you’ll see many a barista doing, making sure the grounds are all evenly wet and agitated, without creating too thin of a slurry (sounds delicious, right?). And in fact that’s just what the Geesaa does — as long as you get the settings right.

Like any gadget these days, this coffee maker is “smart” in that it has a chip and memory inside, but not necessarily smart in any other way. This one lets you select from a variety of “recipes” supposedly corresponding to certain coffees that Geesaa, as its secondary business model, will sell to owners in perfectly measured packets. The packet will come with an NFC card that you just tap on the maker to prompt it to start with those settings.

It’s actually a good idea, but more suited to a hotel room than a home. I preferred to use the app, which, while more than a little overcomplicated, lets you design your own recipes with an impressive variety of variables. You can customize water temperature, breaks between pouring “stages,” the width of the spiral pattern, the rate the water comes out and more.

Although it’s likely you’d just arrive at a favorite recipe or two, it’s nice to be able to experiment or adjust in case of guests, a new variety of coffee, or a new grinder. You can, as I did, swap out the included carafe for your own cone and mug, or a mesh cone, or whatever — as long as it’s roughly the right size, you can make it work. There’s no chip restricting you to certain containers or coffees.
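For a sense of what those recipes amount to, here is a sketch of the sort of structure the settings could map onto; the field names are mine, not Geesaa's actual format:

```python
from dataclasses import dataclass, field

@dataclass
class PourStage:
    water_grams: float         # how much water this stage dispenses
    flow_rate_g_per_s: float   # how fast it comes out
    spiral_width_mm: float     # how wide the spiral sweep runs
    pause_after_s: float       # rest before the next stage (a bloom, say)

@dataclass
class Recipe:
    name: str
    water_temp_c: float
    stages: list = field(default_factory=list)

# A bloom-then-pour routine of the kind you might build in the app.
morning_cup = Recipe(
    name="Single-cup pourover",
    water_temp_c=96.0,
    stages=[
        PourStage(water_grams=50, flow_rate_g_per_s=3.0, spiral_width_mm=30, pause_after_s=45),
        PourStage(water_grams=250, flow_rate_g_per_s=5.0, spiral_width_mm=50, pause_after_s=0),
    ],
)
```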

I’m not sure what the story is with the name, by the way. When you start it up, the little screen says “Coffee Dancer,” which seems like a better English name for the device than Geesaa, but hey.

When it works, it works, but there are still plenty of annoyances that you won’t get with a kettle and a drip cone. Bear in mind, this is with a prototype (third generation, but still) device and app still in testing.

One thing I’ve noticed is that the temperature seems too low in general. Even the highest available temperature, 97 C (around 206 F), doesn’t seem as hot as it should. Built-in recipes produced coffee that seemed only warm, not hot. Perhaps the water cools as it travels along the arm and passes through the air — this is nontrivial when you’re talking about little droplets! So by the time it gets to the coffee it may be lower than you’d like, while coming out of a kettle it will almost always be about as hot as it can get. (Not that you want the hottest water possible, but too cool is as much a problem as too hot.)

I ran out of filters for the included carafe so I used my gold Kone filter, which worked great.

The on-device interface is pretty limited, with a little dial and LCD screen that displays two lines at a time. It’s pre-loaded with a ton of recipes for coffee types you may never see (what true coffee-lover orders pre-ground single-serve packets?), and the app is cluttered with ways to fill out taste profiles, news and things that few people seem likely to take advantage of. Once you’ve used a recipe you can call it up from the maker itself, at least.

One time I saw the carafe was a bit off-center when it started brewing, and when I adjusted it, the spinning platform just stopped and wouldn’t restart. Another time the head didn’t move during the brewing process, just blasting the center of the grounds until the cone was almost completely full. (You can of course stop the machine at any point and restart it should something go wrong.)

Yet when it worked, it was consistently good coffee and much quicker than my standard manual single cup process.

Aesthetically it’s fine — modern and straightforward, though without the elegance one sees in Bodum and Ratio’s design.

It comes in white, too. You know, for white kitchens.

The maker itself is quite large — unnecessarily so, I feel — though I know the base has to conceal the spinning mechanism and a few other things. But at more than a foot wide and eight inches deep, and almost a foot tall, it has quite a considerable footprint, larger than many other coffee machines.

I feel like the Geesaa is a good coffee-making mechanism burdened by an overcomplicated digital interface. I honestly would have preferred mechanical dials on the maker itself, one each for temperature, amount and perhaps brew style (all at once, bloom first, take a break after 45 seconds, etc). Maybe something to control its spiral width too.

And of course at $700 (at the currently available pledge level) this thing is expensive as hell. The comparisons made in the campaign pitch aren’t really accurate — you can get an excellent coffee maker like a Bonavita for $150, and of course plenty for less than that.

At $700, and with this thing’s capabilities, and with the side hustle of selling coffee packets, this seems like a better match for a boutique hotel room or fancy office kitchen than an ordinary coffee lover’s home. I enjoy using it, but its bulk and complexity are antithetical to the minimal coffee-making experience I have enjoyed for years. Still, it’s cool to see weird new coffee-making methods appear, and if you’re interested, you can still back it on Kickstarter for the next week or so.

