Back in 2017, the European Union took the shockingly rational step of largely eliminating roaming charges for its citizens travelling among member nations, dubbing it the “Roam-like-at-home” system. Operating across the EU’s 27 member states as well as Iceland, Liechtenstein and Norway — which together make up the European Economic Area — but not the UK because Brexit, Roam-like-at-home was set to expire at the end of June. On Thursday, however, the European Commission announced that it will be extending the system for another decade, through 2032.
The EC cites benefits to both consumers and telecom providers as part of its decision, with consumers enjoying “a better roaming experience, with the same quality of mobile service abroad as they have at home,” as well as improved access to emergency services and increased transparency in charging rates so travellers in the EU won’t find a massive bill waiting for them when they get home.
“Remember when we had to switch off mobile data when travelling in Europe — to avoid ending up with a massive roaming bill?” Thierry Breton, Commissioner for the Internal Market, said in Thursday’s press statement. “Well this is history. And we intend to keep it this way for at least the next 10 years. Better speed, more transparency: we keep improving EU citizens’ lives.”
The extended rules strongly suggest that carriers “ensure that consumers have access to use 4G, or the more advanced 5G, networks, if these are available at the destination” and “automatically interrupt mobile services if the mobile services over non-terrestrial networks reach charges of €50 or another predefined limit.” What’s more, they require that the 112 emergency number be available across the entire economic area and, by June 2023, that carriers notify travellers of that service by text or popup when they enter the EU.
Most importantly, the new rules will put a couple of euros back in consumers’ pockets because the EU is run by rational adults who can negotiate with telecom carriers for better wholesale data and voice pricing without the entire process devolving into a constitutional crisis. Users can expect to pay €2 per GB this year, with that rate steadily dropping to €1 per GB from 2027 on; €0.022 per minute for voice until 2025, when it drops to €0.019; and €0.004 per SMS until 2025, when it nudges down to €0.003.
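Taking the article’s figures at face value, here’s a quick Python sketch of what those per-unit rates add up to for a hypothetical trip, comparing this year’s caps against the post-2027 caps. The usage numbers are invented for illustration; the prices are the ones quoted above.

```python
# Hypothetical usage; the per-unit caps are the figures cited above.
RATES = {
    2022: {"per_gb": 2.00, "per_min": 0.022, "per_sms": 0.004},
    2027: {"per_gb": 1.00, "per_min": 0.019, "per_sms": 0.003},
}

def trip_cost(year, gb, minutes, texts):
    """Total cost of a trip's roaming usage at a given year's caps."""
    r = RATES[year]
    return gb * r["per_gb"] + minutes * r["per_min"] + texts * r["per_sms"]

for year in (2022, 2027):
    print(year, f"€{trip_cost(year, gb=5, minutes=60, texts=40):.2f}")
# 2022 -> €11.48; 2027 -> €6.26 for the same usage
```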
In yet another historic reversal of long-standing precedent, the US Supreme Court on Thursday ruled 6–3 along ideological lines to severely limit the Environmental Protection Agency’s authority to regulate carbon emissions from power plants, further hamstringing the Biden administration’s ability to combat global warming.
The case, West Virginia v. Environmental Protection Agency, No. 20-1530, centered both on whether the Clean Air Act gives the EPA the power to issue regulations for the power industry and whether Congress must “speak with particular clarity when it authorizes executive agencies to address major political and economic questions,” a theory the court refers to as the “major questions doctrine.”
In short, the court holds that only Congress, not the EPA, has the power to regulate emissions. “Capping carbon dioxide emissions at a level that will force a nationwide transition away from the use of coal to generate electricity may be a sensible solution to the crisis of the day,” Chief Justice Roberts wrote in the majority opinion. “But it is not plausible that Congress gave EPA the authority to adopt on its own such a regulatory scheme… A decision of such magnitude and consequence rests with Congress itself, or an agency acting pursuant to a clear delegation from that representative body.”
“Hard on the heels of snatching away fundamental liberties, the right-wing activist court just curtailed vital climate action,” Jason Rylander, an attorney at the Center for Biological Diversity’s Climate Law Institute, responded in a press statement Thursday. “It’s a bad decision and an unnecessary one, but the EPA can still limit greenhouse gases at the source under Section 111 and more broadly through other Clean Air Act provisions. In the wake of this ruling, EPA must use its remaining authority to the fullest.”
The EPA case grew out of the Trump administration’s efforts to relax carbon emission regulations for power plants through what it called the Affordable Clean Energy Rule, arguing that the Clean Air Act limited the EPA’s authority to measures “that can be put into operation at a building, structure, facility or installation.” A divided three-judge appeals court struck down the rule on Trump’s last full day as president, noting that it was based on a “fundamental misconstruction” of the CAA and gleaned only through a “tortured series of misreadings.”
Had it gone into effect, the Affordable Clean Energy Rule would have replaced the Obama administration’s Clean Power Plan of 2015, which would have forced the energy industry further away from coal power. The CPP never went into effect either: the Supreme Court blocked it in 2016, ruling that individual states didn’t have to adhere to the rule while the EPA fielded a litany of frivolous lawsuits from conservative states and the coal industry (the single-circle Venn diagram of which being West Virginia).
“The E.P.A. has ample discretion in carrying out its mandate,” the appeals court stated. “But it may not shirk its responsibility by imagining new limitations that the plain language of the statute does not clearly require.”
This decision doesn’t just impact the EPA’s ability to do its job, from limiting emissions at specific power plants to operating the existing cap-and-trade carbon offset policy; it also hints at what other regressive steps the court’s conservative majority may be planning to take. During the pandemic, the court had already blocked eviction moratoriums enacted by the CDC and told OSHA that it couldn’t mandate vaccination requirements for large companies. More recently, the court declared states incapable of regulating their own gun laws but absolutely good-to-go on regulating women’s bodily autonomy, gutted our Miranda rights, and further stripped Native American tribes of their sovereignty.
“Today, the court strips the Environmental Protection Agency (EPA) of the power Congress gave it to respond to the most pressing environmental challenge of our time,” Justice Elena Kagan wrote in dissent, joined by Justices Stephen Breyer and Sonia Sotomayor.
Last year, hurricanes hammered the Southern and Eastern US coasts at the cost of more than 160 lives and $70 billion in damages. Thanks to climate change, it’s only going to get worse. In order to quickly and accurately predict these increasingly severe weather patterns, the National Oceanic and Atmospheric Administration (NOAA) announced Tuesday that it has effectively tripled its supercomputing (and therefore weather modelling) capacity with the addition of two high-performance computing (HPC) systems built by General Dynamics.
“This is a big day for NOAA and the state of weather forecasting,” Ken Graham, director of NOAA’s National Weather Service, said in a press statement. “Researchers are developing new ensemble-based forecast models at record speed, and now we have the computing power needed to implement many of these substantial advancements to improve weather and climate prediction.”
General Dynamics was awarded the $505 million contract back in 2020 and delivered the two computers, dubbed Dogwood and Cactus, to their respective locations in Manassas, Virginia, and Phoenix, Arizona. They’ll replace a pair of older Cray and IBM systems in Reston, Virginia, and Orlando, Florida.
Each HPC operates at 12.1 petaflops, or “a quadrillion calculations per second with 26 petabytes of storage,” Dave Michaud, director of the National Weather Service Office of Central Processing, said during a press call Tuesday morning. That’s “three times the computing capacity and double the storage capacity compared to our previous systems… These systems are amongst the fastest in the world today, currently ranked at number 49 and 50.” Combined with its other supercomputers in West Virginia, Tennessee, Mississippi and Colorado, the NOAA wields a full 42 petaflops of capacity.
With this extra computational horsepower, the NOAA will be able to create higher-resolution models with more realistic physics — and generate more of them with a higher degree of model certainty, Brian Gross, director of NOAA’s Environmental Modeling Center, explained during the call. This should result in more accurate forecasts and longer lead times for storm warnings.
“The new supercomputers will also allow significant upgrades to specific modeling systems in the coming years,” Gross said. “This includes a new hurricane forecast model named the Hurricane Analysis and Forecast System, which is slated to be in operation at the start of the 2023 hurricane season,” and will replace the existing Hurricane Weather Research and Forecasting (HWRF) model.
While the NOAA hasn’t yet quantified exactly how much of an improvement the new supercomputers will bring to the agency’s weather modelling efforts, NWS director Ken Graham is convinced of their value.
“To translate what these new supercomputers will mean for the average American,” he said during the press call, “we are currently developing models that will be able to provide additional lead time in the outbreak of severe weather events and more accurately track the intensity forecasts for hurricanes, both in the ocean and that are expected to hit landfall, and we want to have longer lead times [before they do].”
The skies overhead could soon be filled with constellations of commercial space stations occupying low Earth orbit while human colonists settle the Moon with an eye on Mars, if today’s robber barons have their way. But this won’t result in the same freewheeling Wild West that we saw in the 19th century, unfortunately, as tomorrow’s interplanetary settlers will be bringing their lawyers with them.
In their new book, The End of Astronauts: Why Robots Are the Future of Exploration, renowned astrophysicist and science editor, Donald Goldsmith, and Martin Rees, the UK’s Astronomer Royal, argue in favor of sending robotic scouts — with their lack of weighty necessities like life support systems — out into the void ahead of human explorers. But what happens after these synthetic astronauts discover an exploitable resource or some rich dork declares himself Emperor of Mars? In the excerpt below, Goldsmith and Rees discuss the challenges facing our emerging exoplanetary legal system.
Almost all legal systems have grown organically, the result of long experience that comes from changes in the political, cultural, environmental, and other circumstances of a society. The first sprouts of space law deserve attention from those who may participate in the myriad activities envisioned for the coming decades, as well, perhaps, from those who care to imagine how a Justinian law code could arise in the realm of space.
Those who travel on spacecraft, and to some degree those who will live on another celestial object, occupy situations analogous to those aboard naval vessels, whose laws offer precedents for dealing with crimes or extreme antisocial behavior. These laws typically assign to a single officer or group of officers the power to judge and to inflict punishment, possibly awaiting review in the event of a return to a higher court. This model seems likely to reappear in the first long-distance journeys within the solar system and in the first settlements on other celestial objects, before the usual structure of court systems for larger societies appears on the scene.
As on Earth, however, most law is civil law, not criminal law. A far greater challenge than dealing with criminal acts lies in formulating an appropriate code of civil law that will apply to disputes, whether national or international, arising from spaceborne activities by nations, corporations, or individuals. For half a century, a small cadre of interested parties has developed the new specialty of “space law,” some of which already has the potential for immediate application. What happens if a piece of space debris launched by a particular country or corporation falls onto an unsuspecting group of people or onto their property? What happens if astronauts from different countries lay claim to parts of the moon or an asteroid? And most important in its potential impact, if not in its likelihood: who will speak for Earth if we should receive a message from another civilization?
Conferences on subjects such as these have generated more interest than answers. Human exploration of the moon brought related topics to more widespread attention and argument. During the 1980s, the United Nations seemed the natural arena in which to hash them out, and those discussions eventually produced the outcomes described in this chapter. Today, one suspects, almost no one knows the documents that the United Nations produced, let alone has plans to support countries that obey the guidelines in those documents.
Our hopes for achieving a rational means to define and limit activities beyond our home planet will require more extensive agreements, plus a means of enforcing them. Non-lawyers who read existing and proposed agreements about the use of space should remain aware that lawyers typically define words relating to specialized situations as “terms of art,” giving them meanings other than those that a plain reading would suggest.
For example, the word “recovery” in normal discourse refers to regaining the value of something that has been lost, such as the lost wages that arise from an injury. In more specialized usage, “resource recovery” refers to the act of recycling material that would otherwise go to waste. In the vocabulary of mining operations, however, “recovery” has nothing to do with losing what was once possessed; instead, it refers to the extraction of ore from the ground or the seabed. The word’s gentle nature contrasts with the more accurate term “exploitation,” which often implies disapproval, though in legal matters it often carries only a neutral meaning. For example, in 1982 the United Nations Convention on the Law of the Sea established an International Seabed Authority (ISA) to set rules for the large portion of the seabed that lies beyond the jurisdiction of any nation. By now, 168 countries have signed on to the convention, but the United States has not. According to the ISA’s website, its Mining Code “refers to the whole of the comprehensive set of rules, regulations and procedures issued by ISA to regulate prospecting, exploration and exploitation of marine minerals in the international seabed Area.” In mining circles, no one blinks at plans to exploit a particular location by extracting its mineral resources. Discussions of space law, however, tend to avoid the term “exploitation” in favor of “recovery.”
The Metaverse, as Meta CEO Mark Zuckerberg envisions it, will be a fully immersive virtual experience that rivals reality, at least from the waist up. But the visuals are only part of the overall Metaverse experience.
“Getting spatial audio right is key to delivering a realistic sense of presence in the metaverse,” Zuckerberg wrote in a Friday blog post. “If you’re at a concert, or just talking with friends around a virtual table, a realistic sense of where sound is coming from makes you feel like you’re actually there.”
That concert, the blog post notes, will sound very different if performed in a full-sized concert hall than in a middle school auditorium on account of the differences between their physical spaces and acoustics. As such, Meta’s AI and Reality Lab (MAIR, formerly FAIR) is collaborating with researchers from UT Austin to develop a trio of open source audio “understanding tasks” that will help developers build more immersive AR and VR experiences with more lifelike audio.
The first is MAIR’s Visual Acoustic Matching model, which can adapt a sample audio clip to any given environment using just a picture of the space. Want to hear what the NY Philharmonic would sound like inside San Francisco’s Boom Boom Room? Now you can. Previous simulation models were able to recreate a room’s acoustics based on its layout — but only if the precise geometry and material properties were already known — or from audio sampled within the space, neither of which produced particularly accurate results.
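For a sense of what “matching” a room’s acoustics involves, here is a minimal sketch of the classical approach the post alludes to: convolving a dry recording with a room impulse response (RIR) imposes that room’s reflections and reverb on the audio. AViTAR’s advance is inferring the effect from a photo rather than needing a measured RIR. The file names below are hypothetical, and both clips are assumed to be mono at the same sample rate.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

# Hypothetical inputs: a dry source clip and a measured/simulated RIR.
rate, dry = wavfile.read("dry_clip.wav")
_, rir = wavfile.read("room_impulse.wav")

# Convolution with the impulse response applies the room's acoustics.
wet = fftconvolve(dry.astype(np.float64), rir.astype(np.float64))
wet /= np.max(np.abs(wet))                      # normalize to avoid clipping
wavfile.write("matched_clip.wav", rate, (wet * 32767).astype(np.int16))
```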
AViTAR, as MAIR calls its Visual Acoustic Matching model, instead “learns acoustic matching from in-the-wild web videos, despite their lack of acoustically mismatched audio and unlabeled data,” according to the post.
“One future use case we are interested in involves reliving past memories,” Zuckerberg wrote, betting on nostalgia. “Imagine being able to put on a pair of AR glasses and see an object with the option to play a memory associated with it, such as picking up a tutu and seeing a hologram of your child’s ballet recital. The audio strips away reverberation and makes the memory sound just like the time you experienced it, sitting in your exact seat in the audience.”
MAIR’s Visually-Informed Dereverberation model (VIDA), on the other hand, will strip the echoey effect from playing an instrument in a large, open space like a subway station or cathedral. You’ll hear just the violin, not the reverberation of it bouncing off distant surfaces. Specifically, it “learns to remove reverberation based on both the observed sounds and the visual stream, which reveals cues about room geometry, materials, and speaker locations,” the post explained. This technology could be used to more effectively isolate vocals and spoken commands, making them easier for both humans and machines to understand.
VisualVoice does the same as VIDA but for voices. It uses both visual and audio cues to learn how to separate voices from background noises during its self-supervised training sessions. Meta anticipates this model seeing heavy use in machine understanding applications and in improving accessibility. Think more accurate subtitles, Siri understanding your request even when the room isn’t dead silent, or the acoustics in a virtual chat room shifting as the people speaking move around the digital space. Again, just ignore the lack of legs.
“We envision a future where people can put on AR glasses and relive a holographic memory that looks and sounds the exact way they experienced it from their vantage point, or feel immersed by not just the graphics but also the sounds as they play games in a virtual world,” Zuckerberg wrote, noting that AViTAR and VIDA can only apply their tasks to the one picture they were trained for and will need a lot more development before public release. “These models are bringing us even closer to the multimodal, immersive experiences we want to build in the future.”
Toyota’s US launch of the unpronounceable bZ4X EV is off to a rough start, with the automaker announcing a broad recall on Thursday, barely two months after the vehicle’s debut, over a potentially deadly issue that could see its wheels separate while driving at speed.
Some 2,700 of the electric crossovers are subject to the recall — 2,000 destined for the European market, 260 for the US, 110 for Japan and 20 for Canada. The company implores owners to park their vehicles immediately and not resume driving them until a more “permanent” solution can be devised.
“No one should drive these vehicles until the remedy is performed,” Toyota said in the Thursday notice. “After low-mileage use, all of the hub bolts on the wheel can loosen to the point where the wheel can detach from the vehicle. If a wheel detaches from the vehicle while driving, it could result in a loss of vehicle control, increasing the risk of a crash. The cause of the issue and the driving patterns under which this issue could occur are still under investigation.”
Subaru has issued a similar recall for about 2,600 Solterra EVs. These EVs are functionally identical to the bZ4X and are produced on the same lines at Toyota’s Motomachi facility. There’s no word yet on when Toyota engineers might have a solution for the issue.
As distressing a prospect as it may sound, our world did exist before social media. Those were some interesting times with nary a poorly lit portion of Cheesecake Factory fare to critique, exactly zero epic fails to laugh at and not one adorable paw bean available for ogling. There weren’t even daily main characters! We lived as low-bandwidth savages, huddled around the soft glow of CRT monitors and our cackling, crackling signal modulators, blissfully unaware of the societal upheaval this newfangled internet would bring about.
In his new book, The Modem World: A Prehistory of Social Media, author and Assistant Professor in the Department of Media Studies at the University of Virginia, Kevin Driscoll, examines the halcyon days of the early internet — before even America Online — when the BBS was king, WiFi wasn’t yet even a notion, and the speed of electronic thought topped out at 300 baud.
Early on, the heartbeat of the modem world pulsed at a steady 300 bits per second. Streams of binary digits flowed through the telephone network in 7- and 8-bit chunks, or “bytes,” and each byte corresponded to a single character of text. The typical home computer, hooked up to a fuzzy CRT monitor, could display only about a thousand characters at once, organized into forty columns and twenty-four rows. At 300 bits per second, or 300 “baud,” filling the entire screen took approximately thirty seconds. The text appeared faster than if someone were typing in real time, but it was hardly instantaneous.
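That figure checks out. Here is a quick back-of-the-envelope version in Python, assuming the usual 10 bits on the wire per character (8 data bits plus start and stop framing bits):

```python
COLS, ROWS = 40, 24          # a typical home-computer text screen
BITS_PER_CHAR = 10           # 8 data bits plus start/stop bits (assumed)
LINE_RATE = 300              # bits per second

chars = COLS * ROWS                           # 960 characters on screen
seconds = chars * BITS_PER_CHAR / LINE_RATE   # time to repaint the screen
print(f"{chars} characters at {LINE_RATE} bps: {seconds:.0f} seconds")
# -> 960 characters at 300 bps: 32 seconds
```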
In the late 1970s, the speed at which data moved through dial-up networks followed a specification published by Ma Bell nearly two decades before. Created in the early 1960s, the AT&T Data-Phone system introduced a reliable technique for two-way, machine-to-machine communication over consumer-grade telephone lines. Although Data-Phone was initially sold to large firms to facilitate communication between various offices and a single data-processing center, it soon became a de facto standard for commercial time-sharing services, online databases, and amateur telecom projects. In 1976, Lee Felsenstein of the People’s Computer Company designed a DIY modem kit offering compatibility with the AT&T system for under $100. And as newer tech firms like Hayes Microcomputer Products in Atlanta and US Robotics in Chicago began to sell modems for the home computer market, they assured consumers of their compatibility with the “Bell 103” standard. Rather than compete on speed, these companies sold hobbyist consumers on “smart” features like auto-answer, auto-dial, and programmable “remote control” modes. A 1980 ad for the US Robotics Phone Link Acoustic Modem emphasized its warranty, diagnostic features, and high-end aesthetics: “Sleek… Quiet… Reliable.”
To survive, early PC modem makers had to sell more than modems. They had to sell the value of getting online at all. Today, networking is central to the experience of personal computing — can you imagine a laptop without WiFi? — but in the late 1970s, computer owners did not yet see their machines as communication devices. Against this conventional view, upstart modem makers pitched their products as gateways to a fundamentally different form of computing. Like the home computer itself, modems were sold as transformative technologies, consumer electronics with the potential to change your life. Novation, the first mover in this rhetorical game, promised that its iconic black modem, the Cat, would “tie you into the world.” Hayes soon adopted similar language, describing the Micromodem II as a boundary-breaking technology that would “open your Apple II to the outside world.” Never mind that these “worlds” did not yet exist in 1979. Modem marketing conjured a desirable vision of the near future, specially crafted for computer enthusiasts. Instead of driving to an office park or riding the train, modem owners would be the first truly autonomous information workers: telecommuting to meetings, dialing into remote databases, and swapping files with other “computer people” around the globe. According to Novation, the potential uses for a modem like the Cat were “endless.”
In practice, 300 bits per second did not seem slow. In fact, the range of online services available to microcomputer owners in 1980 was rather astonishing, given their tiny numbers. A Bell-compatible modem like the Pennywhistle or Novation Cat offered access to searchable databases such as Dialog and Dow Jones, as well as communication services like CompuServe and The Source. Despite the hype, microcomputers alone could sometimes seem underwhelming to a public primed by visions of all-powerful, superhuman “world brains.” Yet, as one Byte contributor recounted, the experience of using an online “information retrieval” service felt like consulting an electronic oracle. The oracle accepted queries on virtually any topic — “from aardvarks to zymurgy” — and the answers seemed instantaneous. “What’s your time worth?” asked another Byte writer, comparing the breadth and speed of an online database to a “well-stocked public library.” Furthermore, exploring electronic databases was fun. A representative for Dialog likened searching its system to going on an “adventure” and joked that it was “much less frustrating” than the computer game of the same name. Indeed, many early modem owners came to believe that online information retrieval would be the killer app propelling computer ownership into the mainstream.
Yet it was not access to other machines but access to other people that ultimately drove the adoption of telephone modems among microcomputer owners. Just as email sustained a feeling of community among ARPANET researchers and time-sharing brought thousands of Minnesota teachers and students into collaboration, dial-up modems helped to catalyze a growing network of microcomputer enthusiasts. Whereas users of time-sharing networks tended to access a central computer through a “dumb” terminal, users of microcomputer networks were often themselves typing on a microcomputer. In other words, there was a symmetry between the users and hosts of microcomputer networks. The same apparatus — a microcomputer and modem — used to dial into a BBS could be repurposed to host one. Microcomputers were more expensive than simple terminals, but they were much cheaper than the minicomputers deployed in contemporary time-sharing environments.
Like many fans and enthusiasts, computer hobbyists were eager to connect with others who shared their passion for hands-on technology. News and information about telephone networking spread through the preexisting network of regional computer clubs, fairs, newsletters, and magazines. At the outset of 1979, a first wave of modem owners was meeting on bulletin board systems like CBBS in Chicago and ABBS in San Diego to talk about their hobby. In a 1981 article for InfoWorld, Craig Vaughan, creator of ABBS, characterized these early years as an awakening: “Suddenly, everyone was talking about modems, what they had read on such and such a bulletin board, or which of the alternatives to Ma Bell… was most reliable for long-distance data communication.” By 1982, hundreds of BBSs were operating throughout North America, and the topics of discussion were growing beyond the computing hobby itself. Comparing the participatory culture of BBSs to amateur radio, Vaughan argued that modems transformed the computer from a business tool to a medium for personal expression. Sluggish connection speeds did not slow the spread of the modem world.
True to the original metaphor of the “computerized bulletin board,” all early BBSs provided two core functions: read old messages or post a new message. At this protean stage, the distinction between “files” and “messages” could be rather fuzzy. In a 1983 how-to book for BBS software developers, Lary Myers described three types of files accessible to users: messages, bulletins, and downloads. While all three were stored and transmitted as sequences of ASCII characters, Myers distinguished “the message file” as the defining feature of the BBS. Available day and night, the message file provided an “electronic corkboard” to the community of callers: a place to post announcements, queries, or comments “for the good of all.” Myers’s example routine, written in BASIC, identified each message by a unique number and stored all of the messages on the system in a single random-access file. A comment in Myers’s code suggested that eighty messages would be a reasonable maximum for systems running on a TRS-80. A caller to such a system requested messages by typing numbers on their keyboard, and the system retrieved the corresponding sequence of characters from the message file. New messages were appended to the end of the message file, and when the maximum number of messages was reached, the system simply wrote over the old ones. Like flyers on a corkboard, messages on a BBS were not expected to stay up forever.
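Myers’s routine was written in BASIC for the TRS-80, but the scheme is simple enough to sketch in modern Python: fixed-length records in a single random-access file, each message addressed by its number, with the oldest written over once the cap is hit. The record size and names below are our own, not Myers’s.

```python
import os

RECORD_SIZE = 256    # bytes per message record (our assumption)
MAX_MESSAGES = 80    # Myers's suggested ceiling for a TRS-80 system

class MessageFile:
    """An 'electronic corkboard': a ring of fixed-length records on disk."""

    def __init__(self, path):
        # "r+b" allows seeking and overwriting; create the file if absent.
        self.f = open(path, "r+b" if os.path.exists(path) else "w+b")
        # Count of messages ever posted; for brevity this sketch doesn't
        # persist the counter across restarts once the ring has wrapped.
        self.f.seek(0, os.SEEK_END)
        self.posted = self.f.tell() // RECORD_SIZE

    def post(self, text):
        """Store a message in the next slot, recycling the oldest if full."""
        slot = self.posted % MAX_MESSAGES
        record = text.encode("ascii", "replace")[:RECORD_SIZE].ljust(RECORD_SIZE)
        self.f.seek(slot * RECORD_SIZE)
        self.f.write(record)
        self.f.flush()
        self.posted += 1
        return self.posted - 1          # the message's unique number

    def read(self, number):
        """Fetch a message by number, or None if it has been written over."""
        if number >= self.posted or number < self.posted - MAX_MESSAGES:
            return None
        self.f.seek((number % MAX_MESSAGES) * RECORD_SIZE)
        return self.f.read(RECORD_SIZE).decode("ascii").rstrip()

board = MessageFile("corkboard.dat")
n = board.post("FOR SALE: Novation Cat, barely used. Call after 6pm.")
print(n, board.read(n))
```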