In AI this week, reality knocked for both AI health tech and semi-autonomous driving systems. IBM agreed to sell assets from its Watson Health business to investment firm Francisco Partners, sharply scaling back the division's operations. Meanwhile, the Insurance Institute for Highway Safety (IIHS), which is funded by the insurance industry, announced a new rating program designed to assess how well "partial" automation systems like Tesla's Autopilot safeguard against misuse.
The twin developments symbolize the AI industry's perennial problem: accepting the limitations of AI. In Slate, Jeffrey Funk and Gary Smith have chronicled the overly optimistic AI predictions of recent years, including Ray Kurzweil's claim that computers would have "human-level" intelligence and the ability to "joke, be funny, be romantic, be loving, be sexy" by 2029.
As any expert will confirm, AI is nowhere near human-level intelligence, emotional or otherwise. (Kurzweil's new estimate is 2045.) Similarly, autonomous cars and AI-powered healthcare have not reached the heights futurists once envisioned. This is an important lesson in setting expectations, since the future is not easy to predict, but it is also an example of how profit-seeking supercharges the hype cycle. Under pressure to show ROI, some health tech and autonomous auto companies have buckled under the weight of their excessive promises, as this week's news shows.
Riding high on Watson's win against Jeopardy! champion Ken Jennings, IBM launched Watson Health in 2015, positioning the suite of AI-powered services as the future of augmented care. The company's sales pitch was that Watson could analyze reams of medical data, faster than any human doctor supposedly could, to generate insights that improve health outcomes.
IBM reportedly spent $4 billion bolstering its Watson Health division with acquisitions, but the technology proved ineffective at best and harmful at worst. A STAT report found that the platform often gave poor and unsafe cancer treatment advice because Watson Health's models had been trained on synthetic medical records rather than real patient data.
The demise of Watson Health may be partly attributable to the shifting priorities of IBM CEO Arvind Krishna, but the overblown hype around AI's capabilities in healthcare undoubtedly played a role as well. Studies have shown that almost all ophthalmology datasets come from patients in North America, Europe, and China, meaning ophthalmology diagnostic algorithms are less likely to work well for ethnic groups from underrepresented countries. An audit of a UnitedHealth Group algorithm determined that it could underestimate the number of Black patients in need of greater care. And a growing body of work suggests that skin cancer-detecting algorithms tend to be less accurate when used on Black patients, because the AI models are trained mostly on images of lighter-skinned patients.
Semi-autonomous, AI-powered driving systems are coming under similar scrutiny, especially as automakers push to roll out products they claim can nearly drive cars on their own. In October 2021, Tesla was ordered to turn over data to the National Highway Traffic Safety Administration as part of an investigation into collisions between the company's cars and parked vehicles. The suspicion was that Tesla's Autopilot was responsible, either partially or completely, for the dangerous behavior.
That is not an unreasonable assumption. Late last year, Tesla rolled out an Autopilot update with a bug that caused the automatic braking system to engage for no apparent reason. The glitch slowed cars abruptly as they traveled down the highway, putting them at risk of being rear-ended.
Tesla is not the only vendor struggling to perfect semi-autonomous driving technology. A 2020 study by the American Automobile Association found that most semi-autonomous systems on the market, including those from Kia and BMW, ran into problems at an average rate of once every eight miles. When confronted with a disabled vehicle, for example, the systems caused a collision 66% of the time.
In 2016, GM was forced to delay the rollout of its Super Cruise feature due to unforeseen problems. Ford recently delayed its BlueCruise system to make the tech "more robust."
That brings us to this week's news: the IIHS' rating program for assessing the safeguards of semi-autonomous systems. The group hopes it will encourage automakers to design better safeguards once the first set of ratings, currently in development, is released this year.
"The way many of these systems operate gives people the impression that they're capable of doing more than they really are," said Alexandra Mueller, a research scientist at the IIHS. "But even when drivers understand the limitations of partial automation, their minds can wander. As humans, it is harder for us to remain vigilant when we're watching and waiting for a problem to occur than it is when we're doing all the driving ourselves."
Suffice it to say that AI, whether self-driving or diagnostic, is imperfect, just like the humans who create it. The Jetsons-like future it promises may be enticing, but when lives are on the line, history shows it is best to err on the side of caution.
For AI coverage, send news tips to Kyle Wiggers, and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.
Thanks for reading,
Senior Staff Writer
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative technology and practices. Our site delivers essential information on data technologies and strategies to guide you as you lead your organizations. We invite you to become a member of our community, to access:
- Up-to-date information on topics of interest to you
- Our newsletters
- Gated thought-leader content and discounted access to our prized events, such as Transform 2021: Learn more
- Networking features and more
Become a member