The largest bacterium discovered is visible to the naked eye

When you hear the word “bacteria,” you probably picture organisms that can’t be seen unless they’re placed under a microscope. The bacterium now classified as the largest ever discovered, however, needs no special tools to be visible to the naked eye. Thiomargarita magnifica, as it’s called, takes on a filament-like appearance and can be as long as a human eyelash. As the BBC notes, that makes it bigger than some more complex organisms, such as tiny flies, mites and worms. Marine biologist Olivier Gros first discovered it back in 2009, growing on sunken mangrove tree leaves in the French Caribbean.

Due to the organism’s size, Gros first thought he was looking at a eukaryote rather than a simpler prokaryote like a bacterium. It wasn’t until he got back to his laboratory that he realized that wasn’t the case at all. Years later, Jean-Marie Volland and his team at the Lawrence Berkeley National Laboratory in California took a closer look at the bacterium using various techniques, such as transmission electron microscopy, to confirm that it is indeed a single-celled organism. They’ve recently published a paper describing the centimeter-long bacterium in Science.

Volland said T. magnifica is “5,000 times bigger than most bacteria” and is comparable to an average person “encountering another human as tall as Mount Everest.” Another thing Volland’s team discovered is that the bacterium keeps its DNA organized within a membrane-bound structure. In most bacteria, DNA simply floats freely in the cytoplasm. Further, it has around 6,000 billion bases of DNA. “For comparison, a diploid human genome is approximately six giga (billion) bases in size. So this means that our Thiomargarita stores several orders of magnitude more DNA in itself as compared to a human cell,” said team member Tanja Woyke.
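To put Woyke’s comparison in plain numbers (a quick back-of-the-envelope check on the quoted figures, not a calculation from the paper itself): 6,000 billion bases is 6,000 gigabases, against roughly six for the human genome, so

$$\frac{6{,}000\ \mathrm{Gb}}{6\ \mathrm{Gb}} = 1{,}000 = 10^{3}$$

which is three orders of magnitude, consistent with the “several orders of magnitude” framing.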

While the scientists know that T. magnifica grows on top of mangrove sediments in the Caribbean and that it creates energy to live using chemosynthesis, which is similar to photosynthesis in plants, there’s still a lot about it that remains a mystery. And it’ll likely take some time before the scientists can discover its secrets: They have yet to figure out how to grow the organism in the lab, so Gros has to gather samples every time they want to run an experiment. It doesn’t help that the organism has an unpredictable life cycle. Gros told The New York Times that he couldn’t even find any over the past two months. 

Volland and his team now aim to find a way to grow T. magnifica in the lab. As for Gros, he expects other teams to go off in search of even bigger bacteria, which, like T. magnifica, may also be hiding in plain sight.

Meta’s latest auditory AIs promise a more immersive AR/VR experience

The Metaverse, as Meta CEO Mark Zuckerberg envisions it, will be a fully immersive virtual experience that rivals reality, at least from the waist up. But the visuals are only part of the overall Metaverse experience.

“Getting spatial audio right is key to delivering a realistic sense of presence in the metaverse,” Zuckerberg wrote in a Friday blog post. “If you’re at a concert, or just talking with friends around a virtual table, a realistic sense of where sound is coming from makes you feel like you’re actually there.”

That concert, the blog post notes, will sound very different if performed in a full-sized concert hall than in a middle school auditorium on account of the differences between their physical spaces and acoustics. As such, Meta’s AI and Reality Lab (MAIR, formerly FAIR) is collaborating with researchers from UT Austin to develop a trio of open source audio “understanding tasks” that will help developers build more immersive AR and VR experiences with more lifelike audio.

The first is MAIR’s Visual Acoustic Matching model, which can adapt a sample audio clip to any given environment using just a picture of the space. Want to hear what the NY Philharmonic would sound like inside San Francisco’s Boom Boom Room? Now you can. Previous simulation models could recreate a room’s acoustics from its layout, but only if the precise geometry and material properties were already known, or from audio sampled within the space; neither approach produced particularly accurate results.

MAIR’s solution is AViTAR, which “learns acoustic matching from in-the-wild web videos, despite their lack of acoustically mismatched audio and unlabeled data,” according to the post.
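For a sense of what “acoustic matching” involves under the hood, here’s a minimal sketch of the classical baseline: simulating a target room by convolving a dry recording with that room’s measured impulse response. This is an illustration of the underlying idea, not AViTAR itself (whose advance is inferring the transformation from a single photo instead of needing a measured response); the file names are hypothetical placeholders, and NumPy/SciPy are assumed.

```python
# Classical acoustic matching sketch: to a first approximation, a room's
# effect on sound is convolution with its impulse response (RIR).
# Illustrative baseline only, not Meta's AViTAR model.
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

rate, dry = wavfile.read("dry_clip.wav")        # hypothetical anechoic recording
_, rir = wavfile.read("concert_hall_rir.wav")   # hypothetical measured RIR

dry = dry.astype(np.float64)
rir = rir.astype(np.float64)

wet = fftconvolve(dry, rir)                     # "place" the clip in the hall
wet /= np.max(np.abs(wet))                      # normalize to avoid clipping

wavfile.write("clip_in_hall.wav", rate, (wet * 32767).astype(np.int16))
```

The catch is that measured impulse responses are rarely available for arbitrary spaces, which is exactly the gap the image-based approach targets.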

“One future use case we are interested in involves reliving past memories,” Zuckerberg wrote, betting on nostalgia. “Imagine being able to put on a pair of AR glasses and see an object with the option to play a memory associated with it, such as picking up a tutu and seeing a hologram of your child’s ballet recital. The audio strips away reverberation and makes the memory sound just like the time you experienced it, sitting in your exact seat in the audience.”

MAIR’s Visually-Informed Dereverberation model (VIDA), on the other hand, will strip the echoey effect from playing an instrument in a large, open space like a subway station or cathedral. You’ll hear just the violin, not the reverberation of it bouncing off distant surfaces. Specifically, it “learns to remove reverberation based on both the observed sounds and the visual stream, which reveals cues about room geometry, materials, and speaker locations,” the post explained. This technology could be used to more effectively isolate vocals and spoken commands, making them easier for both humans and machines to understand.
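To see why that’s hard, consider the classical case: when the room’s impulse response is already known, dereverberation reduces to regularized deconvolution, as in the illustrative NumPy sketch below (a simplification for intuition, not Meta’s code). VIDA’s contribution is doing this blindly, pulling the room cues from video instead of a measured response.

```python
import numpy as np

def dereverberate(wet: np.ndarray, rir: np.ndarray, eps: float = 1e-3) -> np.ndarray:
    """Wiener-style inverse filter: recover an approximately dry signal from a
    reverberant one, given the room impulse response (RIR). Illustrative only."""
    n = len(wet)
    H = np.fft.rfft(rir, n)                      # room's frequency response (zero-padded)
    W = np.fft.rfft(wet)                         # spectrum of the reverberant signal
    D = W * np.conj(H) / (np.abs(H) ** 2 + eps)  # regularized inverse filter
    return np.fft.irfft(D, n)
```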

VisualVoice does the same as VIDA but for voices. It uses both visual and audio cues to learn how to separate voices from background noises during its self-supervised training sessions. Meta anticipates this model getting a lot of use in machine understanding applications and accessibility features. Think more accurate subtitles, Siri understanding your request even when the room isn’t dead silent, or the acoustics in a virtual chat room shifting as speakers move around the digital space. Again, just ignore the lack of legs.

“We envision a future where people can put on AR glasses and relive a holographic memory that looks and sounds the exact way they experienced it from their vantage point, or feel immersed by not just the graphics but also the sounds as they play games in a virtual world,” Zuckerberg wrote, noting that AViTAR and VIDA can only apply their tasks to the one picture they were trained for and will need a lot more development before public release. “These models are bringing us even closer to the multimodal, immersive experiences we want to build in the future.”

Take a first look at Formula E’s new Gen3 car in action

Formula E recently showed off its latest Gen3 car, which it says is faster, more agile and “the world’s most efficient” racing vehicle to date. Now, we’re getting a first look at one on a track at England’s Goodwood in the form of the Mahindra M9 Electro with Nick Heidfeld at the wheel.

On its Twitter account, Goodwood said that Heidfeld was “not holding back” and it looked like the car made a clean lap other than a few minor lockups. On track, the Gen3 design certainly looks more subdued and less dramatic than the Gen2, but it’s lighter (840kg compared to 920kg including driver) and quicker in every way.

The Gen3 model is very specifically designed for street circuit racing with high maneuverability and speeds up to 200 MPH. That’s not quite as fast as the 220-230 MPH top speeds for F1 cars, but the Formula E vehicles do that with less than half the power. They’re also highly efficient, with over double the regenerative braking capabilities of the Gen2 cars. Overall, they convert 90 percent of battery energy to mechanical power, compared to 52 percent for F1 cars. 

There are now 11 Gen3 teams confirmed with 22 cars, including DS Automobiles, Dragon/Penske, Envision, Mercedes-EQ, Avalanche Andretti, Jaguar, Maserati, NIO 333, Nissan and Porsche, along with Mahindra. The first season of Gen3 will kick off this winter with pre-season testing. 

Engadget Podcast: Apple’s baffling 13-inch MacBook Pro with M2

What’s so “Pro” about the new 13-inch MacBook Pro? Devindra and Cherlynn chat with Laptop Magazine’s Editor-in-Chief, Sherri L. Smith, about Apple’s confusing new ultraportable. Sure, the M2 chip makes it faster, but why does it have a worse screen and fewer features than the new MacBook Air? Are real professionals better off with the faster (but more expensive) 14-inch MacBook Pro? Also, they dive into the wild new VR headset prototypes from Meta, as well as Twitter’s reinvention of blogging.

Listen above, or subscribe on your podcast app of choice. If you’ve got suggestions or topics you’d like covered on the show, be sure to email us or drop a note in the comments! And be sure to check out our other podcasts, The Morning After and Engadget News!

Topics

  • Apple’s 13-inch MacBook Pro with M2 is a strange disappointment – 1:18

  • Meta’s VR prototypes seek to pass the “visual Turing test” – 22:59

  • Facebook Pay becomes Meta Pay in hopes of becoming the metaverse’s digital wallet – 28:06

  • Microsoft phases out AI that can detect human emotions – 32:45

  • Amazon is working on a way to digitize the voice of your dead loved ones – 33:59

  • Twitter introduces b̶l̶o̶g̶g̶i̶n̶g̶ longform writing feature, Notes – 36:09

  • Carl Pei’s Nothing phone won’t be coming to the US – 42:22

  • Working on – 43:28

  • Pop culture picks – 46:03

Credits
Hosts: Cherlynn Low and Devindra Hardawar
Guest: Sherri L. Smith, Editor-in-Chief, Laptop Magazine
Producer: Ben Ellman
Music: Dale North and Terrence O’Brien
Livestream producer: Julio Barrientos
Graphic artists: Luke Brooks and Brian Oh

Cruise begins charging fares for its driverless taxi service in San Francisco

GM’s Cruise has started charging passengers for fully driverless rides in San Francisco. The company secured a driverless deployment permit from the California Public Utilities Commission (CPUC) earlier this month, making it the first in the industry to do so. That allows Cruise to charge for rides with no safety driver behind the wheel, though its vehicles are limited to select streets in the city. In addition, the company’s paid passenger service can only operate from 10PM to 6AM, and its cars can only drive at a max speed of 30 mph.

Another limitation is that its driverless vehicles aren’t allowed on highways and can’t operate during times of heavy fog and rain. Still, it’s a major milestone, not just for Cruise, but for the nascent robotaxi industry as a whole. Cruise’s permit allows it to operate a commercial driverless ride-hailing service with a fleet of up to 30 vehicles. It previously said that it will roll out fared rides gradually, and it reiterated that plan in its latest announcement, where it noted that it’s “inviting more people” into its driverless vehicles every week. The goal is to eventually be able to offer fared rides all day across the entire city.

Cruise received permission to offer the public robotaxi rides last year, but it could only do so for free. The company, along with Waymo, was finally allowed to charge passengers this March, as long as they were rides with safety drivers behind the wheel. While Waymo can’t charge for fully autonomous rides yet, it’s still the only other company that’s been granted a drivered deployment permit, based on CPUC’s list.

Codemasters breaks down how it made the cars in ‘F1 22’ sound like the real thing

EA’s Codemasters is making F1 22’s audio more realistic with an improved Driver mode plus updates that make broadcast and car sounds more authentic, it revealed in its latest Developer Deep Dive video. It also unveiled the series’ first licensed soundtrack, with 33 songs from artists like Charli XCX, Hozier and Marshmello.

This year Formula 1 introduced all-new cars that rely on floor tunnels to generate downforce and allow for tighter racing, along with all-new engines and more. F1 22 stays on top of those changes not just with the physics and visuals, but also the sounds. To that end, the game has introduced all-new engine bundles based on the real vehicles’ sounds to give you the feeling of sitting in a real Red Bull, Ferrari, Mercedes or other Formula 1 car.

“In a game like F1 22 the cars are the star so we want them to sound as authentic as possible. We record the actual cars every season and it’s important that we recreate the authenticity of the engines,” said audio director Brad Porter. “Players use the sound of the engine to drive the car so it’s important to get that across as accurately as possible.” 

That also includes touches like recording audio using the real headsets from team race engineers and simulating how things would sound to a driver inside a helmet. The developers also used mics very close to what announcers use in order to accurately simulate the broadcast audio.

That allowed the team to enhance the different sound modes available, including both Driver mode and Broadcast mode. The latter is designed to sound as close as possible to what you’d hear on TV, Porter explained. The team also enhanced Cinematic mode to make it “larger than life” with “bespoke” touches like enhanced engine sounds, crowd noise and more. And new settings let players adjust the mix of sounds more than ever.

Other new touches include the addition of Natalie Pinkham as a co-commentator, new recordings of all the announcers and authentic sounds from pit lane, garage and paddocks. Another big change is the addition of licensed music like you’ll find in other EA games, letting players choose from 33 songs by artists ranging from Charli XCX to Deadmau5 to Diplo. “It is an accelerative soundtrack experience, designed to strap the player into the cockpit and driven by the unrivalled energy of the new era of Formula 1,” the development team said.

SpaceX accuses Dish of ‘faulty’ analysis in ongoing battle over 5G spectrum

Dish’s plan to use 12GHz radio spectrum for its 5G network could drastically affect the Starlink satellite internet network, SpaceX said in a letter to the FCC. “If Dish’s lobbying efforts succeed, our study shows that Starlink customers will experience harmful interference more than 77 percent of the time and total outage of service 74 percent of the time, rendering Starlink unusable for most Americans,” wrote SpaceX senior director David Goldman. 

Dish has asked the FCC to allow it to use the 12GHz band for a terrestrial 5G network, despite potential interference with Starlink and other satellite services, including its own Dish Network. Dish and its allies in the 5Gfor12GHz coalition recently published research saying that doing so would be “highly feasible” and that Starlink and similar services “will experience zero harmful interference with 5G.”

However, SpaceX called the analysis “faulty” and told the FCC that “no reasonable engineer” would believe the studies. “SpaceX urges the Commission to investigate whether Dish and [Dell-owned] RS Access filed intentionally misleading reports,” it said. The Elon Musk-owned company also pointed out that the studies don’t align with Dish’s own filings from December 2019 that “concurrent sharing of spectrum… is not viable in the 12 GHz band.” 

In a statement to CNN Business, Dish said its “expert engineers are evaluating SpaceX’s claims in the filing,” but there’s no comment yet from the FCC. Previously, FCC chair Jessica Rosenworcel called the case “one of the most complex dockets we have… it’s going to take a lot of technical work to make sure that the airwaves can accommodate all those different uses without harmful interference.”

Spectrum battles have been waged frequently over the last several years, with one of the most recent centering on potential 5G interference with aircraft altimeters. Recent studies have found that countries that put their spectrum to productive use have expanded their economies significantly compared to other nations.

Nothing Phone 1 pre-order reservations start today

You can finally put money toward the Nothing Phone 1 — provided you can join an exclusive club. Nothing has opened pre-order reservations for its first smartphone using an invitation code system. Private community members go first, and will have 48 hours to use their code, place a £20 (roughly $25) non-refundable deposit and secure an order opportunity on July 12th. Everyone else can sign up for a waiting list that will deliver invitations in batches.

If you do go ahead with an order, Nothing will deduct the deposit from the purchase and supply a further £20 credit to use toward either a Phone 1 accessory or Ear 1 earbuds. The company hasn’t yet revealed the price of the phone itself. As Nothing warned earlier, the Phone 1 won’t officially come to North America outside of a closed beta for a handful of private community investors. The device should work there, but it won’t have full network support.

If the pre-order strategy sounds familiar, it should. Nothing founder Carl Pei’s former outfit OnePlus used an invitation system for years. The effect may be similar. Invitation-based orders help manage tight supply (by controlling sales and improving demand estimates) while creating a cachet that might spur demand. It’s not clear when you’ll get to order a Phone 1 on a whim, but don’t be surprised if you end up waiting awhile.

Nothing’s Carl Pei thinks everyone else’s smartphones are boring

Carl Pei thinks there’s something wrong with the smartphone industry. That’s not to say the handsets on sale today are bad. Across the board, modern mobiles are faster, more sophisticated and take better photos than previous generations. But like a gro…