When asked what percent of Shorts ad revenue will be in this creator pool, a YouTube spokesperson did not address the question. When pressed in a follow-up email, the spokesperson replied, “we don’t have further detail to share beyond what’s here,” referring to a blog post by YouTube announcing the new plan, which also does not disclose that information.

In short, it’s hard to gauge how transformative YouTube’s offer is—and how enticing it will be for TikTokkers.

To be sure, the company’s move will almost certainly still be an improvement over what has long been offered to creators of Shorts—and what is currently offered to TikTokkers. In recent years, the two platforms have used the same payment model: creator funds, which are detached from ad revenue and are static vats of money provided by the platform to be distributed among people making especially engaging content. But as a platform grows, that amount doesn’t necessarily keep pace—even as more eyes are watching, and more new creators are claiming a slice of the pie. That means that as a platform like TikTok prospers, the creators fueling that rise actually earn less. (Representatives of TikTok did not respond to a request for comment from MIT Technology Review.)
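The arithmetic behind that squeeze is simple to sketch. The figures in the toy calculation below are entirely hypothetical, since neither platform discloses fund sizes or eligible-creator counts, but they show how a fixed pool pays out less per creator as the platform grows:

```python
# Toy arithmetic only: all figures are hypothetical, since neither platform
# discloses fund sizes or eligible-creator counts at this granularity.
FUND_PER_YEAR = 200_000_000  # a static creator fund, in dollars (hypothetical)

for creators in (100_000, 500_000, 1_000_000):
    payout = FUND_PER_YEAR / creators
    print(f"{creators:>9,} eligible creators -> ${payout:>8,.2f} each per year")

# As the pool of eligible creators grows 10x, the same fixed fund pays each
# creator 10x less -- the dynamic described above.
```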



Jay Gierak at Ghost, which is based in Mountain View, California, is impressed by Wayve’s demonstrations and agrees with the company’s overall viewpoint. “The robotics approach is not the right way to do this,” says Gierak.

But he’s not sold on Wayve’s total commitment to deep learning. Instead of a single large model, Ghost trains many hundreds of smaller models, each with a specialism. It then hand codes simple rules that tell the self-driving system which models to use in which situations. (Ghost’s approach is similar to that taken by another AV2.0 firm, Autobrains, based in Israel. But Autobrains uses yet another layer of neural networks to learn the rules.)

According to Volkmar Uhlig, Ghost’s co-founder and CTO, splitting the AI into many smaller pieces, each with specific functions, makes it easier to establish that an autonomous vehicle is safe. “At some point, something will happen,” he says. “And a judge will ask you to point to the code that says: ‘If there’s a person in front of you, you have to brake.’ That piece of code needs to exist.” The code can still be learned, but in a large model like Wayve’s it would be hard to find, says Uhlig.
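To make that concrete, here is a minimal sketch, in Python with invented names, of the kind of architecture Ghost describes: hand-coded rules dispatching among small specialist models, with the safety-critical braking rule kept as an explicit, pointable piece of code. Ghost has not published its implementation, so this is purely illustrative:

```python
# A minimal sketch, with hypothetical names, of the architecture Ghost
# describes: hand-coded rules dispatching among many small specialist models,
# with the safety-critical rule kept as explicit, auditable code.

from dataclasses import dataclass

@dataclass
class Scene:
    pedestrian_ahead: bool
    on_freeway: bool
    distance_to_obstacle_m: float

def brake_rule(scene):
    # The explicit rule Uhlig describes: the code a judge could be pointed to.
    if scene.pedestrian_ahead:
        return "BRAKE"
    return None

# Stand-ins for two of the "many hundreds of smaller models" (hypothetical).
def freeway_model(scene):
    return "KEEP_LANE" if scene.distance_to_obstacle_m > 50 else "SLOW"

def urban_model(scene):
    return "PROCEED" if scene.distance_to_obstacle_m > 10 else "SLOW"

def dispatch(scene):
    # Hand-coded rules pick which specialist handles the situation;
    # safety rules are always checked first.
    action = brake_rule(scene)
    if action:
        return action
    return freeway_model(scene) if scene.on_freeway else urban_model(scene)

print(dispatch(Scene(pedestrian_ahead=True, on_freeway=False,
                     distance_to_obstacle_m=8.0)))  # -> BRAKE
```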

Still, the two companies are chasing complementary goals: Ghost wants to make consumer vehicles that can drive themselves on freeways; Wayve wants to be the first company to put driverless cars in 100 cities. Wayve is now working with UK grocery giants Asda and Ocado, collecting data from their urban delivery vehicles.

Yet, by many measures, both firms are far behind the market leaders. Cruise and Waymo have racked up hundreds of hours of driving without a human in their cars and already offer robotaxi services to the public in a small number of locations.

“I don’t want to diminish the scale of the challenge ahead of us,” says Hawke. “The AV industry teaches you humility.”



Just minutes after Putin announced conscription, the administrators of the anti-Kremlin Rospartizan group announced their own “mobilization,” urging their supporters to attack military enlistment offices and the Ministry of Defense with Molotov cocktails. “Ordinary Russians are invited to die for nothing in a foreign land,” they wrote. “Agitate, incite, spread the truth, but do not be the ones who legitimize the Russian government.”

The Rospartizan Telegram group—which has more than 28,000 subscribers—has posted photos and videos purporting to show early action against the military mobilization, including burned-out offices and broken windows at local government buildings. 

Other Telegram channels are offering citizens opportunities for less direct, though far more self-interested, action—namely, fleeing the country, even as the government has instituted a nationwide ban on selling plane tickets to men aged 18 to 65. Groups advising Russians on how to escape into neighboring countries sprang up almost as soon as Putin finished talking, and some groups already on the platform adjusted their message. 

One group, which offers advice and tips on how to cross from Russia to Georgia, is rapidly closing in on 100,000 members. The group dates back to at least November 2020, according to previously pinned messages; since then, it has offered information for potential travelers about how to book spots on minibuses crossing the border and how to travel with pets. 

After Putin’s declaration, the channel was co-opted by young men giving supposed firsthand accounts of crossing the border this week. Users are sharing their age, when and where they crossed the border, and what resistance they encountered from border guards, if any. 

For those who haven’t decided to escape Russia, there are still other messages about how to duck army call-ups. Another channel, set up shortly after Putin’s conscription drive, crowdsources information about where police and other authorities in Moscow are signing up men of military age. It gained 52,000 subscribers in just two days, and its members are keeping track of photos, videos, and maps showing where people are being handed conscription orders. The group is one of many: another Moscow-based Telegram channel doing the same thing has more than 115,000 subscribers. Half that audience joined in 18 hours overnight on September 22. 

“You will not see many calls or advice on established media on how to avoid mobilization,” says Golovchenko. “You will see this on Telegram.”

The Kremlin is trying hard to gain supremacy on Telegram because of its current position as a rich seam of subterfuge for those opposed to Putin and his regime, Golovchenko adds. “What is at stake is the extent to which Telegram can amplify the idea that war is now part of Russia’s everyday life,” he says. “If Russians begin to realize their neighbors and friends and fathers are being killed en masse, that will be crucial.”



We are arguably a long way off from transplanting miniature brain blobs into people (although some have tried putting them in rodents). But we are getting closer to implanting other organoids—potentially those that resemble lungs, livers, or intestines, for example.

The latest progress has been made by Mírian Romitti at the Université libre de Bruxelles in Belgium and her colleagues, who have successfully created miniature, transplantable thyroids from stem cells.

The thyroid is a butterfly-shaped structure in the neck that makes hormones. A lack of these hormones can make people very sick. Around 5% of people have an underactive thyroid, or hypothyroidism, which can lead to fatigue, aches and pains, weight gain, and depression. It can affect brain development in children. And those who are affected often have to take a replacement hormone treatment every single day.

Transplanting organoids

After growing thyroid organoids in a lab for 45 days, Romitti and her colleagues could transplant them into mice that were lacking their own thyroids. The operation appeared to restore the production of thyroid hormones, essentially curing the animals’ hypothyroidism. “The animals were very happy,” as Romitti puts it.

The focus is now on finding a way to safely transplant similar organoids in people. There is plenty of demand—Romitti says her colleague is constantly getting calls and emails from people who are desperate to get a transplanted mini thyroid. But we’re not quite there yet.

Romitti and her teammates made their mini thyroids from stem cells—cells in a “naïve,” flexible state that can be encouraged to form any one of many cell types. It has taken the scientists a decade of research and multiple attempts to find a way to get the cells to form a structure that looks like a thyroid. The end result required genetic modification using a virus to infect the cells, and the team used several drugs to aid the growth of the organoids in a dish.

The stem cells the team used were embryonic stem cells—from a line of cells that were originally taken from a human embryo. These cells couldn’t be used clinically for several reasons—the recipient’s immune system would reject the cells as foreign, for example, and the destruction of embryos for disease treatments would be considered unethical. The next step is to use stem cells generated from a person’s own skin cells. In theory, mini organs created from these cells could be custom-made for individuals. Romitti says her team has made “promising” progress.

Of course, we’ll also have to make sure these organoids are safe. No one knows what they are likely to do in a human body. Will they grow? Shrink away and disappear? Form some kind of cancer? We’ll need more long-term studies to get a better idea of what might happen.



Ann Reardon is probably the last person whose content you’d expect to be banned from YouTube. A former Australian youth worker and a mother of three, she’s been teaching millions of loyal subscribers how to bake since 2011. But the removal email was referring to a video that was not Reardon’s typical sugar-paste fare.

Since 2018, Reardon has used her platform to warn viewers about dangerous new “craft hacks” that are sweeping YouTube, tackling unsafe activities such as poaching eggs in a microwave, bleaching strawberries, and using a Coke can and a flame to pop popcorn.

The most serious is “fractal wood burning,” which involves shooting a high-voltage electrical current across dampened wood to burn a twisting, turning branch-like pattern into its surface. The practice has killed at least 33 people since 2016.

On this occasion, Reardon had been caught up in the inconsistent and messy moderation policies that have long plagued the platform—and her case exposed a failing in the system: How can a warning about harmful hacks be deemed dangerous when the hack videos themselves are not? Read the full story.

—Amelia Tait

DeepMind’s new chatbot uses Google searches plus humans to give better answers

The news: The trick to making a good AI-powered chatbot might be to have humans tell it how to behave—and force the model to back up its claims using the internet, according to a new paper by Alphabet-owned AI lab DeepMind. 

How it works: The chatbot, named Sparrow, is trained on DeepMind’s large language model Chinchilla. It’s designed to talk with humans and answer questions, using live Google searches for information to inform those answers. Based on how useful people find those answers, it’s then trained using a reinforcement learning algorithm, which learns by trial and error to achieve a specific objective. Read the full story.
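For a feel for the mechanics, here is a schematic sketch of that loop: search for evidence, generate candidate answers, let a human pick the better one, and use that choice as the reward. Every name below is a placeholder invented for illustration; DeepMind has not released Sparrow’s code, and the real system is far more involved:

```python
# Schematic sketch of the loop described above. All names are placeholders;
# this is not DeepMind's code.
import random

def web_search(query):
    # Stands in for a live Google search that returns a supporting snippet.
    return f"[evidence snippet retrieved for: {query}]"

class ToyChatbot:
    def generate(self, question, evidence):
        # Stands in for Chinchilla-based generation conditioned on evidence.
        return f"Answer to {question!r}, citing {evidence}"

    def reinforce(self, question, candidates, chosen):
        # Stands in for the reinforcement-learning update that nudges the
        # model toward the answers humans preferred.
        pass

def human_preference(question, answer_a, answer_b):
    # Stands in for a human rater choosing the more useful, better-supported
    # answer (0 or 1). Randomized here only to keep the sketch runnable.
    return random.choice([0, 1])

def training_step(model, question):
    # Produce two candidate answers, each backed by a fresh search result.
    candidates = []
    for _ in range(2):
        evidence = web_search(question)
        candidates.append(model.generate(question, evidence))
    # The rater's choice becomes the trial-and-error reward signal.
    chosen = human_preference(question, *candidates)
    model.reinforce(question, candidates, chosen)

training_step(ToyChatbot(), "Why do leaves change color in autumn?")
```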

—Melissa Heikkilä

Sign up for MIT Technology Review’s latest newsletters

MIT Technology Review is launching four new newsletters over the next few weeks. They’re all brilliant, engaging and will get you up to speed on the biggest topics, arguments and stories in technology today. Monday is The Algorithm (all about AI), Tuesday is China Report (China tech and policy), Wednesday is The Spark (clean energy and climate), and Thursday is The Checkup (health and biotech).



“The problem is that literally anybody can watch these videos—kids, adults, it doesn’t matter,” she says. Matt first saw a fractal wood burning video shared by a friend on Facebook and was so intrigued that “he started watching YouTube videos on it—and they’re endless.” 

Matt was electrocuted when a piece of the casing around the jumper cables he was using came loose and his palm touched metal. “I truly believe if my husband had been fully aware [of the dangers], he wouldn’t have been doing it,” Schmidt says. Her plea is simple: “When you’re dealing with something that has the capability of killing somebody, there should always be a warning … YouTube needs to do a better job, and I know that they can, because they censor all types of people.” 

After Matt’s death, medical professionals from the University of Wisconsin wrote a paper entitled “Shocked Through the Heart and YouTube Is to Blame.” Citing Matt’s death and four fractal wood burning injuries they’d personally treated, they asked that “a warning label be inserted before users can access video content” on the crafting technique. “While it is not possible, or even desirable, to flag every video depicting a potentially risky activity,” they wrote, “it seems practical to apply a warning label to videos that could lead to instantaneous death when imitated.” 

Matt and Caitlin Schmidt had been best friends since they were 12 years old. He leaves behind three children. Schmidt says that her family has suffered “pain, loss and devastation” and will carry lifelong grief. “We are now the cautionary tale,” she says, “and I wish on everything in my life that we weren’t.” 


YouTube told MIT Technology Review its community guidelines prohibit content that’s intended to encourage dangerous activities or has an inherent risk of physical harm. Warnings and age restrictions are applied to graphic videos, and a combination of technology and human staff enforces the company’s guidelines. Dangerous videos banned by YouTube include challenges that pose an imminent risk of injury, pranks that cause emotional distress, drug use, the glorification of violent tragedies, and instructions on how to kill or harm. However, videos can depict dangerous acts if they contain sufficient educational, documentary, scientific, or artistic context. 

YouTube first introduced a ban on dangerous challenges and pranks in January 2019—a day after a blindfolded teenager crashed a car while participating in the so-called “Bird Box challenge.” 

YouTube removed “a number” of fractal wood burning videos and age-restricted others when approached by MIT Technology Review. But the company did not say why it moderates against pranks and challenges but not hacks. 

It would certainly be challenging to do so—each 5-Minute Crafts video contains numerous crafts, one after the other, many of which are simply bizarre but not harmful. And the ambiguity in hack videos—an ambiguity that is not present in challenge videos—can be difficult for human moderators to judge, let alone AI. In September 2020, YouTube reinstated human moderators who had been “put offline” during the pandemic after determining that its AI had been overzealous, doubling the number of incorrect takedowns between April and June. 



The difference between this approach and its predecessors is that DeepMind hopes to use “dialogue in the long term for safety,” says Geoffrey Irving, a safety researcher at DeepMind. 

“That means we don’t expect that the problems that we face in these models—either misinformation or stereotypes or whatever—are obvious at first glance, and we want to talk through them in detail. And that means between machines and humans as well,” he says. 

DeepMind’s idea of using human preferences to optimize how an AI model learns is not new, says Sara Hooker, who leads Cohere for AI, a nonprofit AI research lab. 

“But the improvements are convincing and show clear benefits to human-guided optimization of dialogue agents in a large-language-model setting,” says Hooker. 

Douwe Kiela, a researcher at AI startup Hugging Face, says Sparrow is “a nice next step that follows a general trend in AI, where we are more seriously trying to improve the safety aspects of large-language-model deployments.”

But there is much work to be done before these conversational AI models can be deployed in the wild. 

Sparrow still makes mistakes. The model sometimes goes off topic or makes up random answers. Determined participants were also able to make the model break rules 8% of the time. (This is still an improvement over older models: DeepMind’s previous models broke rules three times more often than Sparrow.) 

“For areas where human harm can be high if an agent answers, such as providing medical and financial advice, this may still feel to many like an unacceptably high failure rate,” Hooker says.

The work is also built around an English-language model, “whereas we live in a world where technology has to safely and responsibly serve many different languages,” she adds.

And Kiela points out another problem: “Relying on Google for information-seeking leads to unknown biases that are hard to uncover, given that everything is closed source.” 



Despite President Biden’s assurances at Wednesday’s United Nations meeting that the US is not seeking a new cold war, one is brewing between the world’s autocracies and democracies—and technology is fueling it.

Late last week, Iran, Turkey, Myanmar, and a handful of other countries took steps toward becoming full members of the Shanghai Cooperation Organization (SCO), an economic and political alliance led by the authoritarian regimes of China and Russia.

The majority of SCO member countries, as well as other authoritarian states, are following China’s lead and trending toward more digital rights abuses: increasing mass digital surveillance of citizens, expanding censorship, and tightening controls on individual expression.

And while democracies also use massive amounts of surveillance technology, it’s the tech trade relationships between authoritarian countries that are enabling the rise of digitally enabled social control. Read the full story.

—Tate Ryan-Mosley

Watch this team of drones 3D-print a tower

The news: A mini-swarm’s worth of drones have been trained to work together to 3D-print some simple towers. Inspired by the way bees or wasps construct large nests, the process has multiple drones work together to build from a single blueprint, with one essentially checking the others’ work as it goes. 

How it works: One drone deposits a layer of building material, and the other verifies the accuracy of everything printed so far. The drones are fully autonomous while flying, but they are monitored by a human who can step in if things go awry.
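As a rough illustration of that division of labor, here is a toy simulation in which one agent deposits each layer and a second checks it against the blueprint before the build continues. Layer sizes, tolerances, and noise levels are all invented; this is not the team’s control code:

```python
# Toy simulation of the builder/checker loop: one drone deposits each layer,
# a second measures it against the shared blueprint, and out-of-tolerance
# layers are corrected before the build continues. All numbers are invented.
import random

BLUEPRINT = [0.02] * 25   # 25 layers, each 2 cm thick (hypothetical)
TOLERANCE = 0.005         # maximum allowed deviation per layer, in meters

def builder_deposit(target):
    # The printing drone lays down material with some physical imprecision.
    return target + random.gauss(0, 0.002)

def checker_measure(printed, target):
    # The scanning drone measures the layer and reports its deviation.
    return printed - target

height = 0.0
for target in BLUEPRINT:
    printed = builder_deposit(target)
    error = checker_measure(printed, target)
    if abs(error) > TOLERANCE:
        # The checker flags the layer; the builder corrects it before moving on.
        printed -= error
    height += printed

print(f"Tower height: {height:.3f} m (blueprint target {sum(BLUEPRINT):.3f} m)")
```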

Why it matters: One day, the method could help with challenging projects such as post-disaster construction or even repairs on buildings that are too high to access safely, the team behind it hopes—and could construct buildings in the Arctic or even on Mars. Read the full story.



Beyond the SCO, Venezuela’s autocratic regime announced in 2017 a smart identification card for its citizens that aggregated employment, voting, and medical information with the help of the Chinese telecom company ZTE. And Huawei, another Chinese telecom corporation, boasts that its smart-city technology is deployed in 700 localities around the world, according to the company’s 2021 annual report—up from 2015, when it had about 150 international contracts in cities.

[Chart: Chinese surveillance platforms used for policing and public security]

Democracies are implicated in digital authoritarianism, too. The US has a formidable surveillance system built on a foundation of Chinese tech; a recent study by the industry research group Top10VPN found over 700,000 US camera networks running equipment from the Chinese companies Hikvision and Dahua. 

US companies also prop up much of the digital authoritarianism industry and are key players in complex supply chains, which makes isolation and accountability difficult. Intel, for example, powers servers for Tiandy, a Chinese company known for developing “smart interrogation chairs” reportedly used in torture. 

[Map: Networks of Hikvision and Dahua cameras outside China]

Beyond the code 

Digital authoritarianism goes beyond software and hardware. More broadly, it’s about how the state can use technology to increase its control over its citizens. 

Internet blackouts caused by state actors, for instance, have been increasing every year for the past decade. The ability of a state to shut off the internet is tied to the extent of its ownership over internet infrastructure, a hallmark of authoritarian regimes like China and Russia. And as the internet becomes more essential to all parts of life, the power of blackouts to destabilize and harm people increases. 

Early this year, as anti-government protests rocked Kazakhstan, an SCO member, the state shut down the internet almost entirely for five days. During this time, Russian troops descended on major cities to quell the dissent. The blackout cost the country more than $400 million and cut off essential services. 

Other tactics include using data fusion and artificial intelligence to act on surveillance data. During last year’s SCO summit, Chinese representatives hosted a panel on the Thousand Cities Strategic Algorithms, which instructed the audience on how to develop a “national data brain” that integrates various forms of financial data and uses artificial intelligence to analyze and make sense of it. According to the SCO website, 50 countries are “conducting talks” with the Thousand Cities Strategic Algorithms initiative. 

Relatedly, the use of facial recognition technology is spreading globally, and investment in advanced visual computing technologies that help make sense of camera footage has also grown, particularly in Russia. 



The VR group requested significantly lower doses of the sedative propofol—in this case used to numb the pain in the hand—than the non-VR group: an average of 125.3 milligrams per hour, compared with 750.6 milligrams per hour, according to the study, described in PLoS ONE. The VR group also left the post-anesthesia recovery unit more quickly, spending an average of 63 minutes there versus 75 minutes for the non-VR group.

The researchers believe that those in the VR group needed lower levels of the anesthetic because they were more distracted than those who didn’t have virtual visual stimuli. However, the team acknowledges, it’s possible that the VR group could have gone into surgery already believing that VR would be effective. This possibility will need to be explored in future trials. 

Reducing the amount of anesthetic a patient receives can help shorten hospital stays and lower the risk of complications, and it could save money on the cost of the drugs themselves.

The team now plans to run a follow-up trial in patients undergoing hip and knee surgery to continue exploring whether VR could help manage anxiety during operations, says Adeel Faruki, an assistant professor in anesthesiology at the University of Colorado, who led the study.

There’s a growing body of evidence that VR can be a useful surgery aid, says Brenda Wiederhold, cofounder of the Virtual Reality Medical Center, who was not involved in the study. However, medical experts would need to monitor patients for cyber sickness, a form of motion sickness that VR triggers in some people.

“We have so many use cases for VR and surgeries, like cesarean births and pre-and post-cardiac surgeries,” she says.

VR may be helpful not only during medical procedures but afterwards too, according to Wiederhold, by reducing the risk of chronic pain. “That’s pretty exciting,” she says.


