A Roomba recorded a woman on the toilet. How did screenshots end up on social media?

This episode we go behind the scenes of an MIT Technology Review investigation that uncovered how sensitive photos taken by an AI powered vacuum were leaked and landed on the internet.

We meet:

  • Eileen Guo, MIT Technology Review
  • Albert Fox Cahn, Surveillance Technology Oversight Project

Credits:

This episode was reported by Eileen Guo and produced by Emma Cillekens and Anthony Green. It was hosted by Jennifer Strong and edited by Amanda Silverman and Mat Honan. This show is mixed by Garret Lang with original music from Garret Lang and Jacob Gorski. Artwork by Stephanie Arnett.

Full transcript:

[TR ID]

Jennifer: As more and more companies put artificial intelligence into their products, they need data to train their systems.

And we don’t typically know where that data comes from. 

But sometimes just by using a product, a company takes that as consent to use our data to improve its products and services. 

Consider a device in a home, where setting it up involves just one person consenting on behalf of every person who enters… and anyone living there—or just visiting—might be unknowingly recorded.

I’m Jennifer Strong and this episode we bring you a Tech Review investigation of training data… that was leaked from inside homes around the world. 

[SHOW ID] 

Jennifer: Last year someone reached out to a reporter I work with… and flagged some pretty concerning photos that were floating around the internet. 

Eileen Guo: They were essentially pictures from inside people’s homes that were captured from low angles, and some had people and animals in them that, in most cases, didn’t appear to know they were being recorded.

Jennifer: This is investigative reporter Eileen Guo.

And based on what she saw… she thought the photos might have been taken by an AI powered vacuum. 

Eileen Guo: They looked like, you know, they were taken from ground level and pointing up so that you could see whole rooms, the ceilings, whoever happened to be in them…

Jennifer: So she set to work investigating. It took months.  

Eileen Guo: So first we had to confirm whether or not they came from robot vacuums, as we suspected. And from there, we also had to then whittle down which robot vacuum they came from. And what we found was that they came from the largest manufacturer, by the number of sales of any robot vacuum, which is iRobot, which produces the Roomba.

Jennifer: It raised questions about whether or not these photos had been taken with consent… and how they wound up on the internet. 

In one of them, a woman is sitting on a toilet.

So our colleague looked into it, and she found the images weren’t of customers… they were of Roomba employees… and people the company calls ‘paid data collectors’.

In other words, the people in the photos were beta testers… and they’d agreed to participate in this process… although it wasn’t totally clear what that meant. 

Eileen Guo: They’re really not as clear as you would think about what the data is ultimately being used for, who it’s being shared with and what other protocols or procedures are going to be keeping them safe—other than a broad statement that this data will be safe.

Jennifer: She doesn’t believe the people who gave permission to be recorded, really knew what they agreed to. 

Eileen Guo: They understood that the robot vacuums would be taking videos from inside their houses, but they didn’t understand that, you know, they would then be labeled and viewed by humans or they didn’t understand that they would be shared with third parties outside of the country. And no one understood that there was a possibility at all that these images could end up on Facebook and Discord, which is how they ultimately got to us.

Jennifer: The investigation found these images were leaked by some data labelers in the gig economy.

At the time they were working for a data labeling company (hired by iRobot) called Scale AI.

Eileen Guo: It’s essentially very low paid workers that are being asked to label images to teach artificial intelligence how to recognize what it is that they’re seeing. And so the fact that these images were shared on the internet was just incredibly surprising, given how sensitive they were.

Jennifer: Labeling these images with relevant tags is called data annotation. 

The process makes it easier for computers to understand and interpret the data in the form of images, text, audio, or video.

And it’s used in everything from flagging inappropriate content on social media to helping robot vacuums recognize what’s around them. 

Eileen Guo: The most useful datasets to train algorithms are the most realistic, meaning that they’re sourced from real environments. But to make all of that data useful for machine learning, you actually need a person to go through and look at whatever it is, or listen to whatever it is, and categorize and label and otherwise just add context to each bit of data. You know, for self driving cars, it’s an image of a street and saying, this is a stoplight that is turning yellow, this is a stoplight that is green. This is a stop sign. 

Jennifer: But there’s more than one way to label data. 

Eileen Guo: If iRobot chose to, they could have gone with other models in which the data would have been safer. They could have gone with outsourcing companies that may be outsourced, but people are still working out of an office instead of on their own computers. And so their work process would be a little bit more controlled. Or they could have actually done the data annotation in house. But for whatever reason, iRobot chose not to go either of those routes.

Jennifer: When Tech Review got in contact with the company—which makes the Roomba—they confirmed the 15 images we’ve been talking about did come from their devices, but from pre-production devices. Meaning these machines weren’t released to consumers.

Eileen Guo: They said that they started an investigation into how these images leaked. They terminated their contract with Scale AI, and also said that they were going to take measures to prevent anything like this from happening in the future. But they really wouldn’t tell us what that meant.  

Jennifer: These days, the most advanced robot vacuums can efficiently move around the room while also making maps of areas being cleaned. 

Plus, they recognize certain objects on the floor and avoid them. 

It’s why these machines no longer drive through certain kinds of messes… like dog poop for example.

But what’s different about these leaked training images is the camera isn’t pointed at the floor…  

Eileen Guo: Why do these cameras point diagonally upwards? Why do they know what’s on the walls or the ceilings? How does that help them navigate around the pet waste, or the phone cords, or the stray sock, or whatever it is? And that has to do with some of the broader goals that iRobot and other robot vacuum companies have for the future, which is to be able to recognize what room it’s in, based on what you have in the home. And all of that is ultimately going to serve the broader goals of these companies, which is to create more robots for the home, and all of this data is going to ultimately help them reach those goals.

Jennifer: In other words… This data collection might be about building new products altogether.

Eileen Guo: These images are not just about iRobot. They’re not just about test users. It’s this whole data supply chain, and this whole new point where personal information can leak out that consumers aren’t really thinking of or aware of. And the thing that’s also scary about this is that as more companies adopt artificial intelligence, they need more data to train that artificial intelligence. And where is that data coming from? Is… is a really big question.

Jennifer: Because in the US, companies aren’t required to disclose that… and privacy policies usually have some version of a line that allows consumer data to be used to improve products and services… which includes training AI. Often, we opt in simply by using the product.

Eileen Guo: So it’s a matter of not even knowing that this is another place where we need to be worried about privacy, whether it’s robot vacuums, or Zoom or anything else that might be gathering data from us.

Jennifer: One option we expect to see more of in the future… is the use of synthetic data… or data that doesn’t come directly from real people. 

And she says companies like Dyson are starting to use it.

Eileen Guo: There’s a lot of hope that synthetic data is the future. It is more privacy protecting because you don’t need real world data. There has been early research that suggests that it is just as accurate if not more so. But most of the experts that I’ve spoken to say that that is anywhere from like 10 years to multiple decades out.

Jennifer: You can find links to our reporting in the show notes… and you can support our journalism by going to tech review dot com slash subscribe.

We’ll be back… right after this.

[MIDROLL]

Albert Fox Cahn: I think this is yet another wake up call that regulators and legislators are way behind in actually enacting the sort of privacy protections we need.

Albert Fox Cahn: My name’s Albert Fox Cahn. I’m the Executive Director of the Surveillance Technology Oversight Project.  

Albert Fox Cahn: Right now it’s the Wild West and companies are kind of making up their own policies as they go along for what counts as an ethical policy for this type of research and development, and, you know, quite frankly, they should not be trusted to set their own ground rules and we see exactly why with this sort of debacle, because here you have a company getting its own employees to sign these ludicrous consent agreements that are just completely lopsided. They are, to my view, almost so bad that they could be unenforceable, all while the government is basically taking a hands off approach on what sort of privacy protections should be in place. 

Jennifer: He’s an anti-surveillance lawyer… a fellow at Yale and with Harvard’s Kennedy School.

And he describes his work as constantly fighting back against the new ways people’s data gets taken or used against them.

Albert Fox Cahn: What we see in here are terms that are designed to protect the privacy of the product, that are designed to protect the intellectual property of iRobot, but actually have no protections at all for the people who have these devices in their home. One of the things that’s really just infuriating for me about this is you have people who are using these devices in homes where it’s almost certain that a third party is going to be videotaped and there’s no provision for consent from that third party. One person is signing off for every single person who lives in that home, who visits that home, whose images might be recorded from within the home. And additionally, you have all these legal fictions in here like, oh, I guarantee that no minor will be recorded as part of this. Even though as far as we know, there’s no actual provision to make sure that people aren’t using these in houses where there are children.

Jennifer: And in the US, it’s anyone’s guess how this data will be handled.

Albert Fox Cahn: When you compare this to the situation we have in Europe, where you actually have, you know, comprehensive privacy legislation, where you have, you know, active enforcement agencies and regulators that are constantly pushing back at the way companies are behaving, and you have active trade unions that would prevent this sort of a testing regime with an employee, most likely. You know, it’s night and day. 

Jennifer: He says having employees work as beta testers is problematic… because they might not feel like they have a choice.

Albert Fox Cahn: The reality is that when you’re an employee, oftentimes you don’t have the ability to meaningfully consent. You oftentimes can’t say no. And so instead of volunteering, you’re being voluntold to bring this product into your home, to collect your data. And so you’ll have this coercive dynamic where I just don’t think, you know, at, at, from a philosophical perspective, from an ethics perspective, that you can have meaningful consent for this sort of an invasive testing program by someone who is in an employment arrangement with the person who’s, you know, making the product.

Jennifer: Our devices already monitor our data… from smartphones to washing machines. 

And that’s only going to get more common as AI gets integrated into more and more products and services.

Albert Fox Cahn: We see ever more money being spent on ever more invasive tools that are capturing data from parts of our lives that we once thought were sacrosanct. I do think that there is just a growing political backlash against this sort of technological power, this surveillance capitalism, this sort of, you know, corporate consolidation.  

Jennifer: And he thinks that pressure is going to lead to new data privacy laws in the US. Partly because this problem is going to get worse.

Albert Fox Cahn: And when we think about the sort of data labeling that goes on, the sorts of, you know, armies of human beings that have to pore over these recordings in order to transform them into the sorts of material that we need to train machine learning systems, there then is an army of people who can potentially take that information, record it, screenshot it, and turn it into something that goes public. And so, you know, I just don’t ever believe companies when they claim that they have this magic way of keeping safe all of the data we hand them. There’s this constant potential harm, especially when we’re dealing with any product that’s in its early training and design phase.

[CREDITS]

Jennifer: This episode was reported by Eileen Guo, produced by Emma Cillekens and Anthony Green, edited by Amanda Silverman and Mat Honan. And it’s mixed by Garret Lang, with original music from Garret Lang and Jacob Gorski.

Thanks for listening, I’m Jennifer Strong.


When I opened the email telling me I’d been accepted to run the London Marathon, I felt elated. And then terrified. Barely six months on from my last marathon, I knew how dedicated I’d have to be to keep running day after day, week after week, month after month, through rain, cold, tiredness, grumpiness, and hangovers.

The marathon is the easy part. It’s the constant grind of the training that kills you—and finding ways to keep it fresh and interesting is part of the challenge. Some exercise nuts think they’ve found a way to liven up their routines: by using the AI chatbot ChatGPT as a sort of proxy personal trainer.

Its appeal is obvious. ChatGPT answers questions in seconds, saving the need to sift through tons of information, and asking follow-up questions will give you a more detailed and personalized answer. But is ChatGPT really the future of how we work out? Or is it just a confident bullshitter? Read the full story.

—Rhiannon Williams

How new technologies could clean up air travel

Aviation is a notorious “hard-to-decarbonize” sector. It makes up about 3% of the world’s greenhouse-gas emissions, and airline traffic could more than double from today’s levels by 2050. 

When it comes to flying, the technical challenge of cutting emissions is especially steep. Fuels for planes need to be especially light and compact, so planes can make it into the sky and still have room for people or cargo. But the industry has some promising ideas for cleaning up its act—and some of them are already taking off. Read the full story.


Hitting the gym

Despite the variable quality of ChatGPT’s fitness tips, some people have actually been following its advice in the gym. 

John Yu, a TikTok content creator based in the US, filmed himself following a six-day full-body training program courtesy of ChatGPT. He instructed it to give him a sample workout plan each day, tailored to which bit of his body he wanted to work (his arms, legs, etc), and then did the workout it gave him. 

The exercises it came up with were perfectly fine, and easy enough to follow. However, Yu found that the moves lacked variety. “Strictly following what ChatGPT gives me is something I’m not really interested in,” he says. 

Lee Lem, a bodybuilding content creator based in Australia, had a similar experience. He asked ChatGPT to create an “optimal leg day” program. It suggested the right sorts of exercises—squats, lunges, deadlifts, and so on—but the rest times between them were far too brief. “It’s hard!” Lem says, laughing. “It’s very unrealistic to only rest 30 seconds between squat sets.”

Lem hit on the core problem with ChatGPT’s suggestions: they fail to consider human bodies. As both he and Yu found out, repetitive movements quickly leave us bored or tired. Human coaches know to mix their suggestions up. ChatGPT has to be explicitly told.

For some, though, the appeal of an AI-produced workout is still irresistible—and something they’re even willing to pay for. Ahmed Mire, a software engineer based in London, is selling ChatGPT-produced plans for $15 each. People give him their workout goals and specifications, and he runs them through ChatGPT. He says he’s already signed up customers since launching the service last month and is considering adding the option to create diet plans too. ChatGPT is free, but he says people pay for the convenience. 

What united everyone I spoke to was their decision to treat ChatGPT’s training suggestions as entertaining experiments rather than serious athletic guidance. They all had a good enough understanding of fitness, and what does and doesn’t work for their bodies, to be able to spot the model’s weaknesses. They all knew they needed to treat its answers skeptically. People who are newer to working out might be more inclined to take them at face value.

The future of fitness?

This doesn’t mean AI models can’t or shouldn’t play a role in developing fitness plans. But it does underline that they can’t necessarily be trusted. ChatGPT will improve and could learn to ask its own questions. For example, it might ask users if there are any exercises they hate, or inquire about any niggling injuries. But essentially, it can’t come up with original suggestions, and it has no fundamental understanding of the concepts it is regurgitating.


Batteries could power planes, at least for short distances. Some companies have been trying out test flights of electric planes powered this way, mostly small eVTOL (electric vertical take-off and landing) aircraft that can carry just a few people. Unlike combustion-powered aircraft, electric planes wouldn’t produce pollution, and they could reach zero emissions if charged with renewable energy.

Batteries have the benefit of being a technology that’s widely used today in electric vehicles, and they’ve gotten much better over their decades of development. But batteries will have to keep improving dramatically for electric planes to carry a significant number of people any significant distance. (Check out my story from last year on electric planes for more.)

Hydrogen could be a versatile fuel for aviation in the future. Planes might use hydrogen in two different ways. It could be burned in combustion engines, similar to how jet fuel is used today. Alternatively, hydrogen could be used in fuel cells, where chemical reactions generate electricity. We love to have options. 

Hydrogen’s environmental impact and feasibility will depend on how it’s being used. Combustion will lead to some tailpipe emissions, though these would be mostly water. Hydrogen-electric planes, like aircraft powered by batteries, could be free from climate pollution depending on how the hydrogen is produced. 

In either case, hydrogen has one key thing going for it: it contains a lot of energy without being too heavy (unlike batteries). When a vehicle has to lug its power source 30,000 feet into the air, it’s better for that power source to be really light—and hydrogen, as the lightest element on the periodic table, fits this bill perfectly. 
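To put that in rough numbers, here are approximate textbook figures for specific energy, the energy stored per kilogram (these figures are mine, not the article’s):

  • hydrogen (lower heating value): roughly 120 MJ/kg
  • jet fuel: roughly 43 MJ/kg
  • lithium-ion battery pack: roughly 0.5–0.9 MJ/kg

By mass, that makes hydrogen about three times as energy-dense as jet fuel, and well over a hundred times as energy-dense as today’s lithium-ion batteries.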

The problem is, while hydrogen is light, it also takes up a lot of space. In order to get it into a small enough volume to carry onboard a plane, hydrogen will likely need to be cooled to cryogenic temperatures (below -250 °C). Designing these systems and getting them onto planes will be difficult. So will sourcing and distributing large amounts of hydrogen made with renewable energy. And there’s the small fact that while there have been some experiments with flying hydrogen-powered planes over the years, the technology still needs work. It’s hard to remake an industry, which is why SAFs (sustainable aviation fuels), the drop-in solution, are probably the most likely to be adopted in the near future, while hydrogen will take decades to break through. 

But there’s been some exciting movement on using hydrogen for aviation over the past couple of years, with big players like Airbus getting into the game and announcing planned test flights. 

And last week, startup ZeroAvia was in the news again, announcing it had completed a test flight of a 19-seat Dornier 228, the largest plane flown partly on hydrogen fuel cells. Before this test, the company had tested a smaller, nine-seat aircraft. 


The news: Eight years ago, a patient lost her power of speech because of ALS, or Lou Gehrig’s disease, which causes progressive paralysis. Now, after volunteering to receive a brain implant, the woman has been able to rapidly communicate phrases at a rate approaching normal speech.

Why it matters: Even in an era of keyboards, thumb-typing, emojis, and internet abbreviations, speech remains the fastest form of human-to-human communication. The scientists from Stanford University say their volunteer smashed previous records by using the brain-reading implant to communicate at a rate of 62 words a minute, three times the previous best. 

What’s next: Although the study has not been formally reviewed, experts have hailed the results as a significant breakthrough. The findings could pave the way for experimental brain-reading technology to leave the lab and become a useful product soon. Read the full story.

—Antonio Regalado

Resolving to live the Year of the Rabbit to the fullest

By Zeyi Yang, China reporter

This past Sunday was the Lunar New Year, the most important holiday for Chinese and several other Asian cultures. It’s supposed to be an opportunity for us to reset and seize new opportunities.

In that spirit, I’ve recently revisited some of my favorite China-focused MIT Technology Review stories from the last year and gone back to the people I interviewed. I asked them whether they’d resolved any troubling challenges, and what they’re hoping for in the Year of the Rabbit. 


6. The history of Zhongguancun, China’s Silicon Valley, explained. (Wired $)

7. A Chinese state-owned bank in Hong Kong is enticing new clients from the mainland with the possibility of getting mRNA vaccine shots. (Financial Times $)

Lost in translation

The new year is for new changes, and as Chinese tech publication Baobian reported, many Chinese Big Tech workers are quitting the industry and reflecting on how they ended up working pointless “bullshit jobs.”

Even though the country’s tech industry is relatively young, these companies, like their Western counterparts, have grown into gigantic corporations burdened with bureaucracy and low efficiency. A main source of frustration for staffers is feeling that they are spending months working on insignificant product changes that could be vetoed at the last minute. For example, making a simple UI design change requires two weeks of opposition research, and there’s little originality in the final product. Some workers also feel they are losing their individual purpose while helping the company optimize its money-making machinery.

Luyi, who worked for Tencent, Alibaba, and ByteDance in different positions, felt that she was chasing abstract numbers based on unreliable data analytics, and ultimately achieving nothing. Last year, she finally decided to quit the tech industry and went to work for an art gallery in Beijing. “When I successfully organize an art exhibit, there’s an immense sense of achievement. I can get a lot of positive feedback on the scene,” she said. That’s the feeling she was missing when she worked in Big Tech.

One more thing

To celebrate the transition from the Year of the Tiger to the Year of the Rabbit, a zoo in western China organized a ceremony on Friday in which a tiger cub and a rabbit were placed on the same table. But the video was promptly cut when the tiger went for the rabbit’s neck, the correspondent began shouting in panic, and the scene descended into chaos. Fortunately, the rabbit was reportedly unharmed. Otherwise it would have been a terrible omen for the new year.

Screenshot of the video when the rabbit and the tiger cub are about to be placed on the table.


In the new research, the Stanford team wanted to know if neurons in the motor cortex contained useful information about speech movements, too. That is, could they detect how “subject T12” was trying to move her mouth, tongue, and vocal cords as she attempted to talk?

These are small, subtle movements, and according to Sabes, one big discovery is that just a few neurons contained enough information to let a computer program predict, with good accuracy, what words the patient was trying to say. That information was conveyed by Shenoy’s team to a computer screen, where the patient’s words appeared as they were spoken by the computer.

The new result builds on previous work by Edward Chang at the University of California, San Francisco, who has written that speech involves the most complicated movements people make. We push out air, add vibrations that make it audible, and form it into words with our mouth, lips, and tongue. To make the sound “f,” you put your top teeth on your lower lip and push air out—just one of dozens of mouth movements needed to speak.  

A path forward

Chang previously used electrodes placed on top of the brain to permit a volunteer to speak through a computer, but in their preprint, the Stanford researchers say their system is more accurate and three to four times faster.

“Our results show a feasible path forward to restore communication to people with paralysis at conversational speeds,” wrote the researchers, who included Shenoy and neurosurgeon Jaimie Henderson.

David Moses, who works with Chang’s team at UCSF, says the current work reaches “impressive new performance benchmarks.” Yet even as records continue to be broken, he says, “it will become increasingly important to demonstrate stable and reliable performance over multi-year time scales.” Any commercial brain implant could have a difficult time getting past regulators, especially if it degrades over time or if the accuracy of the recording falls off.

The path forward is likely to include both more sophisticated implants and closer integration with artificial intelligence. 

The current system already uses a couple of types of machine learning programs. To improve its accuracy, the Stanford team employed software that predicts what word typically comes next in a sentence. “I” is more often followed by “am” than “ham,” even though these words sound similar and could produce similar patterns in someone’s brain. 
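As a rough illustration of the general idea, here is a minimal next-word-prediction sketch in Python. The toy corpus and function names are invented for this example; this is not the Stanford team’s actual software, which is far more sophisticated.

```python
from collections import Counter, defaultdict

# Count how often each word follows another in a tiny toy corpus.
corpus = "i am happy i am here i had ham".split()
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def rank_candidates(prev_word, candidates):
    """Order sound-alike candidates by how often they follow prev_word."""
    return sorted(candidates, key=lambda w: following[prev_word][w], reverse=True)

# "am" follows "i" more often than "ham" does, so it ranks first.
print(rank_candidates("i", ["am", "ham"]))  # ['am', 'ham']
```

A real decoder works with probabilities over huge vocabularies, but the principle is the same: when two words produce similar neural signals, pick the one that is more plausible given the words before it.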


When it comes to the climate, the picture can look bleak.

Emissions of the greenhouse gases that cause climate change are estimated to have reached new heights in 2022. Meanwhile, climate disasters, from record heat waves in China and Europe to devastating floods in Pakistan, seem to be hitting at a breakneck pace.

But a close look at global data shows that there are a few bright spots of good news, and a lot of potential progress ahead. Renewable sources make up a growing fraction of the energy supply, and they’re getting cheaper every year. Countries are also setting new targets for emissions reductions, and unprecedented public investments could unlock more technological advances. 

So despite what can feel like a barrage of bad news, there are at least a few reasons to be hopeful. Read the full story.

—Casey Crownhart

These simple design rules could turn the chip industry on its head

Since the computer was invented, humans have devised many programming languages to command it to do our bidding. For a chip to execute your code, software must translate it into instructions the chip can use. So engineers designate specific binary sequences that prompt the hardware to perform certain actions; these sequences are known as the computer’s instruction set.  


This is a far cry from the field’s reputation in the 1990s, when Wooldridge was finishing his PhD. AI was still seen as a weird, fringe pursuit; the wider tech sector viewed it in a similar way to how established medicine views homeopathy, he says. 

Today’s AI research boom has been fueled by neural networks, which saw a big breakthrough in the 1980s and work by simulating the patterns of the human brain. Back then, the technology hit a wall because the computers of the day weren’t powerful enough to run the software. Today we have lots of data and extremely powerful computers, which makes the technique viable. 

New breakthroughs, such as the chatbot ChatGPT and the text-to-image model Stable Diffusion, seem to come every few months. Technologies like ChatGPT are not fully explored yet, and both industry and academia are still working out how they can be useful, says Stone. 

Instead of a full-blown AI winter, we are likely to see a drop in funding for longer-term AI research and more pressure to make money using the technology, says Wooldridge. Researchers in corporate labs will be under pressure to show that their research can be integrated into products and thus make money, he adds.

That’s already happening. In light of the success of OpenAI’s ChatGPT, Google has declared a “code red” threat situation for its core product, Search, and is looking to aggressively revamp Search with its own AI research. 

Stone sees parallels to what happened at Bell Labs. If Big Tech’s AI labs, which dominate the sector, turn away from deep, longer-term research and focus too much on shorter-term product development, exasperated AI researchers may leave for academia, and these big labs could lose their grip on innovation, he says. 

That’s not necessarily a bad thing. There are a lot of smart people looking for jobs at the moment. Venture capitalists are looking for new startups to invest in as crypto fizzles out, and generative AI has shown how the technology can be made into products. 

This moment presents the AI sector with a once-in-a-generation opportunity to play around with the potential of new technology. Despite all the gloom around the layoffs, it’s an exciting prospect. 


But the silicon switches in your laptop’s central processor don’t inherently understand the word “for” or the symbol “=.” For a chip to execute your Python code, software must translate these words and symbols into instructions a chip can use.  

Engineers designate specific binary sequences to prompt the hardware to perform certain actions. The code “100000,” for example, could order a chip to add two numbers, while the code “100100” could ask it to copy a piece of data. These binary sequences form the chip’s fundamental vocabulary, known as the computer’s instruction set. 
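As a toy sketch of what an instruction set means in practice, here are a few lines of Python that map the article’s two hypothetical opcodes to actions. The encodings are illustrative only; real x86, Arm, and RISC-V instructions are far more involved.

```python
# Toy "instruction set": binary opcodes mapped to actions.
# These two opcodes are the article's hypothetical examples,
# not real x86, Arm, or RISC-V encodings.
OPCODES = {
    0b100000: "add",   # "100000" orders the chip to add two numbers
    0b100100: "copy",  # "100100" asks it to copy a piece of data
}

def execute(opcode, a, b):
    """Dispatch one instruction the way a (very simplified) CPU might."""
    action = OPCODES.get(opcode)
    if action == "add":
        return a + b
    if action == "copy":
        return a  # copy the first operand unchanged
    raise ValueError(f"unknown opcode {opcode:06b}")

print(execute(0b100000, 2, 3))  # 5 -- the "add" instruction
print(execute(0b100100, 7, 0))  # 7 -- the "copy" instruction
```

Two chips whose hardware agrees on the same table can run the same binaries, which is why the choice of instruction set matters so much.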

For years, the chip industry has relied on a variety of proprietary instruction sets. Two major types dominate the market today: x86, which is used by Intel and AMD, and Arm, made by the company of the same name. Companies must license these instruction sets—which can cost millions of dollars for a single design. And because x86 and Arm chips speak different languages, software developers must make a version of the same app to suit each instruction set. 

Lately, though, many hardware and software companies worldwide have begun to converge around a publicly available instruction set known as RISC-V. It’s a shift that could radically change the chip industry. RISC-V proponents say that this instruction set makes computer chip design more accessible to smaller companies and budding entrepreneurs by liberating them from costly licensing fees. 

“There are already billions of RISC-V-based cores out there, in everything from earbuds all the way up to cloud servers,” says Mark Himelstein, the CTO of RISC-V International, a nonprofit supporting the technology. 

In February 2022, Intel itself pledged $1 billion to develop the RISC-V ecosystem, along with other priorities. While Himelstein predicts it will take a few years before RISC-V chips are widespread among personal computers, the first laptop with a RISC-V chip, the Roma by Xcalibyte and DeepComputing, became available in June for pre-order.

What is RISC-V?

You can think of RISC-V (pronounced “risk five”) as a set of design norms, like Bluetooth, for computer chips. It’s known as an “open standard.” That means anyone—you, me, Intel—can participate in the development of those standards. In addition, anyone can design a computer chip based on RISC-V’s instruction set. Those chips would then be able to execute any software designed for RISC-V. (Note that technology based on an “open standard” differs from “open-source” technology. An open standard typically designates technology specifications, whereas “open source” generally refers to software whose source code is freely available for reference and use.)

A group of computer scientists at UC Berkeley developed the basis for RISC-V in 2010 as a teaching tool for chip design. Proprietary central processing units (CPUs) were too complicated and opaque for students to learn from. RISC-V’s creators made the instruction set public and soon found themselves fielding questions about it. By 2015, a group of academic institutions and companies, including Google and IBM, founded RISC-V International to standardize the instruction set. 
