Facebook is experimenting with a new audio feature for Groups. The company is testing audio channels, which will bring Discord-like voice chats to Groups, Facebook shared in a blog post. Facebook already has audio features for Groups with rooms. But…
Return to Monkey Island’s first gameplay trailer is a swashbuckling trip of nostalgia
When Return to Monkey Island arrives later this year, players will finally discover the secret of Monkey Island. That’s the pitch series creator Ron Gilbert made in the game’s newest trailer, which premiered today during Nintendo’s latest Direct s…
Both of Valve’s classic Portal games arrive on the Switch today
A few months ago, Valve announced that both of its excellent Portal games were coming to the Nintendo Switch, but we didn’t know when. Today’s Nintendo Direct presentation cleared that up. Portal Companion Collection will arrive on the Switch later today for $19.99. The collection includes both the original Portal from 2007 as well as the more expansive, story-driven Portal 2 from 2011. Whether you missed these games the first time out or just want to replay a pair of classics, this collection sounds like a good way to return to one of the most intriguing worlds Valve ever created.
While the original Portal was strictly a single-player experience, Portal 2 has a split-screen co-op experience; you can also pay this mode with a friend online as well. And while these games originated on the PC, Valve also released Portal 2 for the PlayStation 3 — and if I recall, the game’s controls mapped to a controller very well. Given that the Portal series is more puzzle-based than traditional first-person games, you shouldn’t have any problems navigating the world with a pair of Joy-Con controllers.
July 11, 2020 | Ken Hasuda Column
The publication of "The Truth About the Baby Hatch" ("Akachan Post no Shinjitsu"), released by Shogakukan on June 30, was preceded by several months of turmoil. It began with a letter from the author that arrived in April, saying a book would be published in June. Several books related to the "Konotori no Yurikago" (Stork's Cradle) baby hatch have been published in the past, but normally the project is explained in advance, and interviews and manuscript checks are…
Illegal compensation at Kyoto Shimbun HD: the company's own reporters to file criminal complaint against former senior adviser and others | Mainichi Shimbun
Kyoto Shimbun Holdings (HD), the Nakagyo Ward, Kyoto-based parent of the Kyoto Shimbun newspaper, was found by a third-party committee to have made illegal payments totaling roughly 1.9 billion yen, including compensation paid over many years to a former senior adviser who is a major shareholder. Several Kyoto Shimbun reporters now plan to file a criminal complaint with the Kyoto District Public Prosecutors Office against the former adviser and the executives involved in the payments, alleging a violation of the Companies Act (unlawful provision of benefits), according to people familiar with the matter…
The "Mayor of Kyiv" who met with the mayors of three major European cities was a fake. Is this a Russian plot? | A "psychological weapon" of hybrid warfare?
In the week of June 20, 2022, the mayors of three major European cities were drawn, one after another, into situations that damaged their credibility and exposed them to ridicule. Each mayor held a video call with Vitali Klitschko, mayor of the Ukrainian capital Kyiv, only this Klitschko was an impostor. There were several suspicious signs, however… Germany's…
‘GoldenEra’ is a loving, if muddled, tribute to ‘GoldenEye 007’
GoldenEye 007 for the Nintendo 64 is one of those games that will forever be held up as a milestone in the art form. It wasn’t the first FPS on a console, or even the first FPS on the Nintendo 64, but it was unquestionably the best. And the most influential…
Meta’s latest auditory AIs promise a more immersive AR/VR experience
The Metaverse, as Meta CEO Mark Zuckerberg envisions it, will be a fully immersive virtual experience that rivals reality, at least from the waist up. But the visuals are only part of the overall Metaverse experience.
“Getting spatial audio right is key to delivering a realistic sense of presence in the metaverse,” Zuckerberg wrote in a Friday blog post. “If you’re at a concert, or just talking with friends around a virtual table, a realistic sense of where sound is coming from makes you feel like you’re actually there.”
That concert, the blog post notes, will sound very different if performed in a full-sized concert hall than in a middle school auditorium on account of the differences between their physical spaces and acoustics. As such, Meta’s AI and Reality Lab (MAIR, formerly FAIR) is collaborating with researchers from UT Austin to develop a trio of open source audio “understanding tasks” that will help developers build more immersive AR and VR experiences with more lifelike audio.
The first is MAIR’s Visual Acoustic Matching model, which can adapt a sample audio clip to any given environment using just a picture of the space. Want to hear what the NY Philharmonic would sound like inside San Francisco’s Boom Boom Room? Now you can. Previous simulation models were able to recreate a room’s acoustics based on its layout — but only if the precise geometry and material properties were already known — or from audio sampled within the space, neither of which produced particularly accurate results.
MAIR’s solution, called AViTAR, “learns acoustic matching from in-the-wild web videos, despite their lack of acoustically mismatched audio and unlabeled data,” according to the post.
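Meta didn't release code alongside the post, but the classic signal-processing baseline this improves on is easy to sketch: convolve "dry" audio with a room impulse response (RIR) that characterizes the target space. Below is a minimal Python sketch of that idea; the function name, parameters and synthetic RIR are illustrative stand-ins, not anything from Meta's model.

```python
import numpy as np
from scipy.signal import fftconvolve

def match_acoustics(dry_audio: np.ndarray, room_ir: np.ndarray) -> np.ndarray:
    """Re-render dry audio as if it were played in the room described by room_ir."""
    wet = fftconvolve(dry_audio, room_ir, mode="full")[: len(dry_audio)]
    peak = np.max(np.abs(wet))
    return wet / peak if peak > 0 else wet  # normalize to avoid clipping

# Toy demo: a one-second 440 Hz tone placed in a synthetic "hall" whose
# impulse response is an exponentially decaying noise tail -- a crude
# stand-in for a measured (or, in AViTAR's case, visually predicted) RIR.
sr = 16_000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
rng = np.random.default_rng(0)
ir = rng.standard_normal(sr // 2) * np.exp(-np.linspace(0, 8, sr // 2))
wet_tone = match_acoustics(tone, ir)
```

The part these models automate is producing that impulse response in the first place: AViTAR, per the post, learns the room's acoustics from a single image rather than from measurements taken inside it.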
“One future use case we are interested in involves reliving past memories,” Zuckerberg wrote, betting on nostalgia. “Imagine being able to put on a pair of AR glasses and see an object with the option to play a memory associated with it, such as picking up a tutu and seeing a hologram of your child’s ballet recital. The audio strips away reverberation and makes the memory sound just like the time you experienced it, sitting in your exact seat in the audience.”
MAIR’s Visually-Informed Dereverberation model (VIDA), on the other hand, will strip the echoey effect from an instrument played in a large, open space like a subway station or cathedral. You’ll hear just the violin, not the reverberation of it bouncing off distant surfaces. Specifically, it “learns to remove reverberation based on both the observed sounds and the visual stream, which reveals cues about room geometry, materials, and speaker locations,” the post explained. This technology could be used to more effectively isolate vocals and spoken commands, making them easier for both humans and machines to understand.
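Again, the post doesn't include code, but dereverberation has a textbook counterpart worth sketching: if the room's impulse response were known, Wiener deconvolution can undo much of the reverb. The sketch below assumes exactly that known impulse response, which is the assumption VIDA avoids by inferring the reverberation from video instead; the function and its parameters are hypothetical.

```python
import numpy as np

def wiener_dereverb(wet: np.ndarray, ir: np.ndarray, snr_db: float = 30.0) -> np.ndarray:
    """Estimate the dry signal from a reverberant one via Wiener deconvolution.

    Assumes the room impulse response (ir) is known. VIDA's whole point is
    removing that assumption by reading acoustic cues from the video stream.
    """
    n = len(wet)
    H = np.fft.rfft(ir, n=n)            # room's transfer function
    W = np.fft.rfft(wet, n=n)
    noise_power = 10 ** (-snr_db / 10)  # regularizer from an assumed SNR
    # Wiener filter: conj(H) / (|H|^2 + noise) avoids dividing by near-zero bins.
    G = np.conj(H) / (np.abs(H) ** 2 + noise_power)
    return np.fft.irfft(W * G, n=n)
```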
VisualVoice does the same as VIDA, but for voices. It uses both visual and audio cues to learn how to separate voices from background noise during its self-supervised training. Meta anticipates this model getting a lot of use in machine understanding applications and in improving accessibility. Think more accurate subtitles, Siri understanding your request even when the room isn’t dead silent, or the acoustics in a virtual chat room shifting as speakers move around the digital space. Again, just ignore the lack of legs.
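As a rough illustration of the mask-based separation family VisualVoice belongs to (this is not Meta's code), the sketch below applies a time-frequency mask to a mixture. Real systems predict the mask from audio and lip movements; here an "oracle" mask is computed from known sources purely for demonstration, and all names are hypothetical.

```python
import numpy as np
from scipy.signal import stft, istft

def apply_mask(mixture: np.ndarray, mask: np.ndarray, sr: int = 16_000) -> np.ndarray:
    """Pull one source out of a mixture with a time-frequency mask."""
    _, _, Z = stft(mixture, fs=sr, nperseg=512)
    _, voice = istft(Z * mask, fs=sr, nperseg=512)
    return voice

# Oracle demo: build the ideal ratio mask from known sources, then
# separate their sum. A real model predicts this mask instead.
sr = 16_000
rng = np.random.default_rng(1)
voice = np.sin(2 * np.pi * 220 * np.arange(sr) / sr)  # stand-in "voice"
noise = 0.3 * rng.standard_normal(sr)                 # background noise
_, _, V = stft(voice, fs=sr, nperseg=512)
_, _, N = stft(noise, fs=sr, nperseg=512)
irm = np.abs(V) / (np.abs(V) + np.abs(N) + 1e-8)      # ideal ratio mask
separated = apply_mask(voice + noise, irm)
```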
“We envision a future where people can put on AR glasses and relive a holographic memory that looks and sounds the exact way they experienced it from their vantage point, or feel immersed by not just the graphics but also the sounds as they play games in a virtual world,” Zuckerberg wrote, noting that AViTAR and VIDA can only apply their tasks to the one picture they were trained for and will need a lot more development before public release. “These models are bringing us even closer to the multimodal, immersive experiences we want to build in the future.”
Codemasters breaks down how it made the cars in ‘F1 22’ sound like the real thing
EA’s Codemasters is making F1 22’s audio more realistic with improved driver modes plus updates that make broadcast and car sounds more authentic, it revealed in its latest Developer Deep Dive video. It also unveiled the series’ first licensed soundtrack, with 33 songs from artists like Charli XCX, Hozier and Marshmello.
This year, Formula 1 introduced all-new cars that rely on floor tunnels to generate downforce and allow for tighter racing, along with all-new engines and more. F1 22 is on top of those changes, not just with the physics and visuals but also the sounds. To that end, the game introduces all-new engine bundles based on recordings of the real vehicles to give you the feeling of sitting in a real Red Bull, Ferrari, Mercedes or other Formula 1 car.
“In a game like F1 22 the cars are the star so we want them to sound as authentic as possible. We record the actual cars every season and it’s important that we recreate the authenticity of the engines,” said audio director Brad Porter. “Players use the sound of the engine to drive the car so it’s important to get that across as accurately as possible.”
That also includes touches like recording audio using the real headsets worn by team race engineers and simulating how things would sound to a driver inside a helmet. The developers also used mics very close to what announcers use in order to accurately simulate the broadcast audio.
That allowed the team to enhance the different sound modes available, including both Driver mode and Broadcast mode. The latter is designed to sound as close as possible to what you’d hear on TV, Porter explained. The team also enhanced Cinematic mode to make it “larger than life” with “bespoke” touches like enhanced engine sounds, crowd noise and more, and added new settings that let players adjust the mix of sounds more than ever.
Other new touches include the addition of Natalie Pinkham as a co-commentator, new recordings of all the announcers, and authentic sounds from the pit lane, garage and paddock. Another big change is the addition of licensed music like you’ll find in other EA games, letting players choose from 33 songs by artists ranging from Charli XCX to deadmau5 to Diplo. “It is an accelerative soundtrack experience, designed to strap the player into the cockpit and driven by the unrivalled energy of the new era of Formula 1,” the development team said.
Twitter brings its closed caption toggle to Android and iOS
Twitter is giving you the power to switch closed captions on or off on your mobile device. The social network has started rolling out a closed caption toggle to everyone on Android and iOS, a couple of months after it started testing the feature. So long as a video posted on the platform has available subtitles, you’ll see a CC button in its top-right corner — simply tap it to turn subtitles on or off.
It’s a great addition for accessibility purposes, since it lets you show captions whenever you want. In the past, the CC button only appeared on the web, and on mobile subtitles only showed up when your sound was turned off. Further, captions would automatically disappear when you expanded a video, since doing so enabled sound playback. A few years ago, you even had to go into accessibility settings to switch on closed captioning if you wanted to see subtitles for your videos at all. That said, the feature still has a limitation: the button will only show up for a video if captions have been provided for it.
The choice is now yours: the closed caption toggle is now available for everyone on iOS and Android!
Tap the “CC” button on videos with available captions to turn the captions off/on. https://t.co/GceKv68wvi
— Twitter Support (@TwitterSupport) June 23, 2022
Twitter introduced automatically generated captions for videos back in December; that feature is unrelated to this particular toggle, according to a spokesperson who talked to The Verge. Automated captions will, however, only show up on muted videos unless you choose the option to see them at all times through the website’s accessibility settings page. There’s also no way to report inaccurate automated captions at the moment.