How NASA might protect tomorrow’s astronauts from deep space radiation

There are a million and one ways to die in space, whether it’s from micrometeoroid impacts shredding your ship or solar flares frying its electronics, drowning in your own sweat during a spacewalk or having a cracked coworker push you out an airlock. …

Twitch clarifies its self-harm policy

Twitch has been tightening its content policies in recent months, and that now includes mentions of self-harm. The livestreaming service has updated its Community Guidelines to include examples of the self-harm behavior it doesn’t allow. The clarified policy is meant to foster “meaningful conversation” about mental and physical health while preventing further harm.

Broadcasters can share stories of self-harm or suicide, but can’t describe them in “graphic detail” or share suicide notes. Studies show this could lead to similar thoughts among vulnerable people, Twitch said. The refined policy also singles out content that encourages eating disorders, such as unhealthy weight loss programs and attempts to glorify common eating disorder habits.

The move comes relatively soon after Twitch clamped down on usernames referencing hard drugs and sex, as well as creators who routinely spread misinformation. Not long after, the Amazon brand rolled out improved reporting tools to help viewers flag inappropriate content while providing a streamlined appeals process. Twitch has dealt with abuses in the weeks since, but it’s clearly hoping the policy changes will reduce the volume of incidents going forward.

In the US, the National Suicide Prevention Lifeline is 1-800-273-8255. Crisis Text Line can be reached by texting HOME to 741741 (US), 686868 (Canada), or 85258 (UK). Wikipedia maintains a list of crisis lines for people outside of those countries.

The largest bacterium discovered is visible to the naked eye

When you hear the word “bacteria,” you probably picture organisms that can’t be seen unless they’re placed under a microscope. The bacterium now classified as the largest ever discovered, however, needs no special tools to be visible to the naked eye. Thiomargarita magnifica, as it’s called, takes on a filament-like appearance and can be as long as a human eyelash. As the BBC notes, that makes it bigger than some more complex organisms, such as tiny flies, mites and worms. Marine biologist Olivier Gros first discovered it back in 2009, growing on sunken mangrove tree leaves in the French Caribbean.

Due to the organism’s size, Gros first thought he was looking at a eukaryote rather than simpler prokaryotic organisms like bacteria. It wasn’t until he got back to his laboratory that he found out that it wasn’t the case at all. Years later, Jean-Marie Volland and his team at the Lawrence Berkeley National Laboratory in California took a closer look at the bacterium using various techniques, such as transmission electron microscopy, to confirm that it is indeed a single-cell organism. They’ve recently published a paper describing the centimeter-long bacterium in Science.

Volland said T. magnifica is “5,000 times bigger than most bacteria” and is comparable to an average person “encountering another human as tall as Mount Everest.” Another thing Volland’s team discovered is that the bacterium keeps its DNA organized within a membrane-bound structure; in most bacteria, DNA simply floats freely in the cytoplasm. Further, it has around 6,000 billion bases of DNA. “For comparison, a diploid human genome is approximately six giga (billion) bases in size. So this means that our Thiomargarita stores several orders of magnitude more DNA in itself as compared to a human cell,” said team member Tanja Woyke.
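
To put those numbers in perspective, here’s the quoted comparison as a quick back-of-the-envelope check (the figures are the ones cited above; the snippet is purely illustrative):

    # Rough size comparison based on the figures quoted above.
    t_magnifica_bases = 6_000e9   # ~6,000 billion DNA bases in one T. magnifica cell
    human_diploid_bases = 6e9     # ~6 billion bases in a diploid human genome

    ratio = t_magnifica_bases / human_diploid_bases
    print(f"T. magnifica carries roughly {ratio:,.0f}x more DNA than a human cell")
    # -> roughly 1,000x, i.e. about three orders of magnitude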

While the scientists know that T. magnifica grows on top of mangrove sediments in the Caribbean and that it creates energy to live using chemosynthesis, which is similar to photosynthesis in plants, there’s still a lot about it that remains a mystery. And it’ll likely take some time before the scientists can discover its secrets: They have yet to figure out how to grow the organism in the lab, so Gros has to gather samples every time they want to run an experiment. It doesn’t help that the organism has an unpredictable life cycle. Gros told The New York Times that he couldn’t even find any over the past two months. 

Volland and his team now aim to find a way to grow T. magnifica in the lab. As for Gros, he now expects other teams to go off in search of even bigger bacteria, which like T. magnifica, may also be hiding in plain sight.

Meta’s latest auditory AIs promise a more immersive AR/VR experience

The Metaverse, as Meta CEO Mark Zuckerberg envisions it, will be a fully immersive virtual experience that rivals reality, at least from the waist up. But the visuals are only part of the overall Metaverse experience.

“Getting spatial audio right is key to delivering a realistic sense of presence in the metaverse,” Zuckerberg wrote in a Friday blog post. “If you’re at a concert, or just talking with friends around a virtual table, a realistic sense of where sound is coming from makes you feel like you’re actually there.”

That concert, the blog post notes, will sound very different if performed in a full-sized concert hall than in a middle school auditorium on account of the differences between their physical spaces and acoustics. As such, Meta’s AI and Reality Lab (MAIR, formerly FAIR) is collaborating with researchers from UT Austin to develop a trio of open source audio “understanding tasks” that will help developers build more immersive AR and VR experiences with more lifelike audio.

The first is MAIR’s Visual Acoustic Matching model, which can adapt a sample audio clip to any given environment using just a picture of the space. Want to hear what the NY Philharmonic would sound like inside San Francisco’s Boom Boom Room? Now you can. Previous simulation models were able to recreate a room’s acoustics based on its layout — but only if the precise geometry and material properties were already known — or from audio sampled within the space, neither of which produced particularly accurate results.
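
For context, the conventional way to make a “dry” recording sound like it was captured in a particular room is to convolve it with that room’s impulse response, which is exactly what requires the measured audio or precise geometry mentioned above. The sketch below shows that baseline with a synthetic tone and a toy exponential-decay impulse response, both invented purely for illustration; the Visual Acoustic Matching model’s contribution is inferring the room’s acoustic character from a single image instead.

    import numpy as np
    from scipy.signal import fftconvolve

    sr = 16_000  # sample rate in Hz

    # A stand-in "dry" source: one second of a 440 Hz tone.
    t = np.arange(sr) / sr
    dry = np.sin(2 * np.pi * 440 * t)

    # Toy room impulse response: decaying noise, a crude stand-in for a
    # measured IR of a concert hall or auditorium.
    rng = np.random.default_rng(0)
    ir = rng.standard_normal(int(0.5 * sr)) * np.exp(-t[: int(0.5 * sr)] * 8)
    ir /= np.max(np.abs(ir))

    # Convolving the dry signal with the impulse response "places" it in the room.
    wet = fftconvolve(dry, ir)
    wet /= np.max(np.abs(wet))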

MAIR’s model, AViTAR, “learns acoustic matching from in-the-wild web videos, despite their lack of acoustically mismatched audio and unlabeled data,” according to the post.

“One future use case we are interested in involves reliving past memories,” Zuckerberg wrote, betting on nostalgia. “Imagine being able to put on a pair of AR glasses and see an object with the option to play a memory associated with it, such as picking up a tutu and seeing a hologram of your child’s ballet recital. The audio strips away reverberation and makes the memory sound just like the time you experienced it, sitting in your exact seat in the audience.”

MAIR’s Visually-Informed Dereverberation model (VIDA), on the other hand, will strip the echoey effect from playing an instrument in a large, open space like a subway station or cathedral. You’ll hear just the violin, not the reverberation of it bouncing off distant surfaces. Specifically, it “learns to remove reverberation based on both the observed sounds and the visual stream, which reveals cues about room geometry, materials, and speaker locations,” the post explained. This technology could be used to more effectively isolate vocals and spoken commands, making them easier for both humans and machines to understand.
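
VIDA’s trick is doing that without a measured impulse response, relying on visual cues instead. For contrast, the classical route sketched below (a minimal Wiener-style deconvolution, continuing from the toy dry/ir/wet signals in the previous snippet) only works because the room’s impulse response is assumed to be known.

    import numpy as np

    # Wiener-style deconvolution: recover the dry signal when the room's
    # impulse response is known -- the very assumption VIDA is designed to remove.
    # Continues from the acoustic-matching sketch above (dry, ir, wet).
    n = len(wet)
    H = np.fft.rfft(ir, n=n)
    Y = np.fft.rfft(wet, n=n)

    eps = 1e-3  # regularization to avoid dividing by near-zero frequency bins
    X_hat = Y * np.conj(H) / (np.abs(H) ** 2 + eps)
    dry_estimate = np.fft.irfft(X_hat, n=n)[: len(dry)]  # dry signal, up to scale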

VisualVoice does the same as VIDA, but for voices. It uses both visual and audio cues to learn how to separate voices from background noise during its self-supervised training sessions. Meta anticipates this model getting a lot of use in machine understanding applications and in improving accessibility: think more accurate subtitles, Siri understanding your request even when the room isn’t dead silent, or the acoustics in a virtual chat room shifting as speakers move around the digital space. Again, just ignore the lack of legs.
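
Meta’s post doesn’t spell out VisualVoice’s internals, but most learned separators, audio-visual or otherwise, work by predicting a time-frequency mask that’s applied to the mixture’s spectrogram. The sketch below uses an “oracle” mask computed from a known source, which is what a real model has to predict (in VisualVoice’s case, with help from visual cues), purely to illustrate the masking mechanism; all signals are synthetic.

    import numpy as np
    from scipy.signal import stft, istft

    sr = 16_000
    t = np.arange(sr) / sr
    voice_a = np.sin(2 * np.pi * 220 * t)          # stand-in for speaker A
    voice_b = 0.5 * np.sin(2 * np.pi * 1000 * t)   # stand-in for background noise
    mixture = voice_a + voice_b

    # Spectrograms of speaker A alone and of the mixture.
    _, _, A = stft(voice_a, fs=sr, nperseg=512)
    _, _, M = stft(mixture, fs=sr, nperseg=512)

    # "Oracle" ratio mask: roughly the fraction of each time-frequency bin
    # that belongs to speaker A.
    mask = np.clip(np.abs(A) / (np.abs(M) + 1e-8), 0.0, 1.0)

    # Apply the mask to the mixture and resynthesize speaker A's estimate.
    _, voice_a_estimate = istft(mask * M, fs=sr, nperseg=512)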

“We envision a future where people can put on AR glasses and relive a holographic memory that looks and sounds the exact way they experienced it from their vantage point, or feel immersed by not just the graphics but also the sounds as they play games in a virtual world,” Zuckerberg wrote, noting that AViTAR and VIDA can only apply their tasks to the one picture they were trained for and will need a lot more development before public release. “These models are bringing us even closer to the multimodal, immersive experiences we want to build in the future.”

Take a first look at Formula E’s new Gen3 car in action

Formula E recently showed off its latest Gen3 car, which it says is faster, more agile and “the world’s most efficient” racing vehicle to date. Now, we’re getting a first look at one on the track at England’s Goodwood, in the form of the Mahindra M9Electro with Nick Heidfeld at the wheel.

On its Twitter account, Goodwood said that Heidfeld was “not holding back” and it looked like the car made a clean lap other than a few minor lockups. On track, the Gen3 design certainly looks more subdued and less dramatic than the Gen2, but it’s lighter (840kg compared to 920kg including driver) and quicker in every way.

The Gen3 model is very specifically designed for street circuit racing with high maneuverability and speeds up to 200 MPH. That’s not quite as fast as the 220-230 MPH top speeds for F1 cars, but the Formula E vehicles do that with less than half the power. They’re also highly efficient, with over double the regenerative braking capabilities of the Gen2 cars. Overall, they convert 90 percent of battery energy to mechanical power, compared to 52 percent for F1 cars. 
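
Those efficiency figures are easier to grasp with a quick back-of-the-envelope calculation. The percentages come from the article; the 100 kWh starting figure below is arbitrary and purely illustrative.

    # Energy-conversion comparison using the figures quoted above.
    source_energy_kwh = 100   # arbitrary illustrative amount of on-board energy
    gen3_efficiency = 0.90    # Formula E Gen3: ~90% converted to mechanical power
    f1_efficiency = 0.52      # F1: ~52% converted to mechanical power

    print(f"Gen3 delivers {source_energy_kwh * gen3_efficiency:.0f} kWh to the wheels")
    print(f"An F1 car delivers {source_energy_kwh * f1_efficiency:.0f} kWh from the same energy")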

There are now 11 Gen3 teams confirmed with 22 cars, including DS Automobiles, Dragon/Penske, Envision, Mercedes-EQ, Avalanche Andretti, Jaguar, Maserati, NIO 333, Nissan and Porsche, along with Mahindra. The first season of Gen3 will kick off this winter with pre-season testing. 

Engadget Podcast: Apple’s baffling 13-inch MacBook Pro with M2

What’s so “Pro” about the new 13-inch MacBook Pro? Devindra and Cherlynn chat with Laptop Magazine’s Editor-in-Chief, Sherri L. Smith, about Apple’s confusing new ultraportable. Sure, the M2 chip makes it faster, but why does it have a worse screen and fewer features than the new MacBook Air? Are real professionals better off with the faster (but more expensive) 14-inch MacBook Pro? Also, they dive into the wild new VR headset prototypes from Meta, as well as Twitter’s reinvention of blogging.

Listen above, or subscribe on your podcast app of choice. If you’ve got suggestions or topics you’d like covered on the show, be sure to email us or drop a note in the comments! And be sure to check out our other podcasts, the Morning After and Engadget News!


Topics

  • Apple’s 13-inch MacBook Pro with M2 is a strange disappointment – 1:18

  • Meta’s VR prototypes seek to pass the “visual Turing test” – 22:59

  • Facebook Pay becomes Meta Pay in hopes of becoming the metaverse’s digital wallet – 28:06

  • Microsoft phases out AI that can detect human emotions – 32:45

  • Amazon is working on a way to digitize the voices of your dead loved ones – 33:59

  • Twitter introduces b̶l̶o̶g̶g̶i̶n̶g̶ longform writing feature, Notes – 36:09

  • Carl Pei’s Nothing phone won’t be coming to the US – 42:22

  • Working on – 43:28

  • Pop culture picks – 46:03


Credits
Hosts: Cherlynn Low and Devindra Hardawar
Guest: Sherri L. Smith, Editor-in-Chief, Laptop Magazine
Producer: Ben Ellman
Music: Dale North and Terrence O’Brien
Livestream producers: Julio Barrientos
Graphic artists: Luke Brooks and Brian Oh

Cruise begins charging fares for its driverless taxi service in San Francisco

GM’s Cruise has started charging passengers for fully driverless rides in San Francisco. The company secured a driverless deployment permit from the California Public Utilities Commission (CPUC) earlier this month, making it the first in the industry to do so. That allows Cruise to charge for rides with no safety driver behind the wheel, though its vehicles are limited to select streets in the city. In addition, the company’s paid passenger service can only operate from 10PM to 6AM, and its cars can only drive at a max speed of 30 mph.

Another limitation is that its driverless vehicles aren’t allowed on highways and can’t operate during heavy fog or rain. Still, it’s a major milestone, not just for Cruise, but for the nascent robotaxi industry as a whole. Cruise’s permit allows it to operate a commercial driverless ride-hailing service with a fleet of up to 30 vehicles. It previously said that it would roll out fared rides gradually, and it reiterated that plan in its latest announcement, noting that it’s “inviting more people” into its driverless vehicles every week. The goal is to eventually offer fared rides all day across the entire city.
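
The permit’s operating envelope boils down to a short list of rules. Purely as an illustration, here’s one way the constraints described above could be encoded; the function name, parameters and the simplified weather flag are invented for this sketch, not anything Cruise or the CPUC has published.

    from datetime import time

    # Hypothetical encoding of the operating limits described above.
    SERVICE_START = time(22, 0)   # 10 PM
    SERVICE_END = time(6, 0)      # 6 AM
    MAX_SPEED_MPH = 30

    def ride_allowed(now: time, speed_mph: float, on_highway: bool,
                     heavy_fog_or_rain: bool) -> bool:
        """Return True if a fared driverless ride fits the permit limits above."""
        in_service_window = now >= SERVICE_START or now <= SERVICE_END
        return (in_service_window
                and speed_mph <= MAX_SPEED_MPH
                and not on_highway
                and not heavy_fog_or_rain)

    print(ride_allowed(time(23, 30), 25, False, False))  # True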

Cruise received permission to offer the public robotaxi rides last year, but it could only do so for free. The company, along with Waymo, was finally allowed to charge passengers this March, as long as they were rides with safety drivers behind the wheel. While Waymo can’t charge for fully autonomous rides yet, it’s still the only other company that’s been granted a drivered deployment permit, based on CPUC’s list.