This AI model can quickly read chest X-rays in intensive care

A new time-saving AI method for reading chest X-rays during intubation has life-saving potential. Above, a doctor examines chest X-rays in an intensive care unit. (AP Photo/Rodrigo Abd)

A new artificial intelligence method rapidly and accurately estimates where a medical device such as a life support tube should be placed, allowing patients to receive near-instant care while easing the workload of busy radiologists and other health care workers.

The novel system was invented by a team of computer science researchers and radiologists at the University of California, Los Angeles. The model identifies a safe area for two widely used kinds of medical tubes to be placed inside a patient's body: the endotracheal tube, which is inserted into a person's windpipe and then connected to a ventilator to facilitate breathing, and the nasogastric tube, which connects the nose with the stomach and is typically used to deliver drugs or food. The UCLA researchers have noted that their system could be expanded to many other medical devices in the future. A patent application for the innovation was published by the World Intellectual Property Organization on April 15.

The first few moments after a person is brought into an emergency room or intensive care unit are crucial for their long-term health. "Timing is critical here. Minutes really count. You want to take action immediately," Matthew S. Brown told The Academic Times. Brown is a co-inventor of the system and director of the Center for Computer Vision and Imaging Biomarkers at UCLA. 

"I was recruited to UCLA back in the '90s," Brown continued. "Even then, artificial intelligence was seen as having the potential to create an impact in the way that images are interpreted."

AI could help free up the busy schedules of radiologists like one of Brown's co-inventors, Fereidoun Abtin, who was briefly interrupted during an interview with The Academic Times to speak with a nurse, a reminder of the competing demands the team's invention aims to ease. "Disturbances [like] a phone ringing, or just now, somebody walking in – human attention can get affected by these issues, where a computer doesn't. I think AI will make practicing medicine better," he said.

Today, radiologists have to examine an X-ray of a patient's chest to decide where a tube is best placed. Though the reading itself is relatively quick for radiologists, who have years of specialized training, hospitals often face a bottleneck because the volume of cases far exceeds the number of radiologists available to pore over medical images.

One major challenge is the placement of endotracheal tubes. Intubation is common in the ICU among patients who have trouble breathing, including those with COVID-19. If the confirming X-ray is misinterpreted or not read quickly enough, a tube mistakenly placed in a patient's esophagus can go undetected, causing severe complications or, in rare cases, death.

"It's particularly tragic because clearly, it's a death that's preventable," Brown said. And with medical errors, the third leading cause of death in the U.S., claiming an estimated 251,000 American lives per year, innovations that help health care workers avoid mistakes are sorely needed.

The new system and method can determine a "Safe Zone" for placing medical tubes or lines in patients. It was developed with the help of a trained team of image analysts, who marked the placement of the tubes in 2,000 chest X-rays from patients, Brown noted. From these annotated images, the AI model learned to recognize both a patient's anatomy and the Safe Zone for a device. The model uses landmarks within the human body to check that a patient is properly intubated.
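
The article does not detail the model's internals, but the kind of landmark-based check it describes can be sketched in a few lines. In the hypothetical Python below, the `ett_tip` and `carina` landmarks, the pixel spacing and the 3-7 cm placement window are illustrative assumptions, not the Safe Zone the UCLA team actually learned from its annotated X-rays.

```python
from dataclasses import dataclass

# Illustrative sketch only: landmark names, pixel spacing and thresholds are
# assumptions for this example, not the patented UCLA method.
@dataclass
class Landmarks:
    """Pixel coordinates (row, col) that an upstream model might predict."""
    ett_tip: tuple   # tip of the endotracheal tube
    carina: tuple    # point where the windpipe splits into the two main bronchi

def ett_in_safe_zone(lm: Landmarks, pixel_spacing_mm: float,
                     min_mm: float = 30.0, max_mm: float = 70.0) -> bool:
    """Return True if the tube tip sits an acceptable distance above the carina."""
    gap_px = lm.carina[0] - lm.ett_tip[0]   # the tip should sit above the carina (smaller row index)
    gap_mm = gap_px * pixel_spacing_mm
    return min_mm <= gap_mm <= max_mm

# Example: a tip 180 pixels above the carina on a 0.25 mm/pixel radiograph is 45 mm away.
print(ett_in_safe_zone(Landmarks(ett_tip=(520, 710), carina=(700, 715)), pixel_spacing_mm=0.25))
```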

"These are sick patients in the ER or OR. In some cases, it requires one mistake to tip them over and have very bad consequences," explained Abtin. "[Meanwhile], there are 400 to 500 X-rays being poured into the worklist, at least at UCLA. Figuring out which one to read first is a very important question." The new system may be able to help hospital nurses and doctors as they triage patients.

The AI model serves as a visual assistant for radiologists by highlighting exactly where the tube is, which can help a less experienced trainee or physician read an X-ray. It then identifies safe positions for tubes on a medical image and can alert staff if the tube is outside the intended area. "This [model] allows a second pair of eyes and another level of confidence" for doctors, Abtin noted. 
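
How such a finding reaches staff is not spelled out beyond "alerting" them, so the snippet below is a hypothetical wrapper: it packages the Safe Zone check into a payload a triage worklist or notification system could act on, with field names and priority levels invented for illustration.

```python
# Hypothetical alert payload; the field names and priority levels are
# illustrative, not taken from the patent or the UCLA deployment.
def build_alert(patient_id: str, in_safe_zone: bool, gap_mm: float) -> dict:
    """Package the model's finding so a worklist or notification system can act on it."""
    return {
        "patient_id": patient_id,
        "finding": "tube within Safe Zone" if in_safe_zone else "tube outside Safe Zone",
        "measured_gap_mm": gap_mm,
        "priority": "routine" if in_safe_zone else "critical",  # malpositioned tubes jump the queue
    }

print(build_alert("A123", in_safe_zone=False, gap_mm=12.0))
```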

"The reason why we think what we've done is bold is because no other human puts their eyes on that AI output before it goes directly to the ICU physician," said Brown. "We've taken the extra step of having AI run in real time and immediately push that output back into the patient's medical record, so it's literally available within minutes of the scan." This near-simultaneous feedback may be crucial for patients who otherwise have to wait anywhere from 15 to 30 minutes for a radiologist to check an X-ray, Abtin noted.

Brown said his team's model will only continue to improve as it gets more training data from researchers and real cases. "We think that the technology might move further upstream — in other words, be integrated into the X-ray device itself," he said. The more instantaneous this process becomes, the better the care doctors can provide to patients struggling to breathe.

Yet Brown sounded a note of caution about the pace of AI adoption in medicine.

"You don't want to tie up new, potentially life-saving technologies in red tape," Brown said. "But on the other hand … It's no mean feat for AI to be put in a critical task like health care. It's one thing to recognize a face in your photo album, but no one's going to get hurt if that goes wrong. I think AI will come, but if it's done right, it's going to come a bit slower than what people are led to believe by the headlines."

The application for this patent, "System and method for determining a device safe zone," was filed Oct. 12, 2020, with the World Intellectual Property Organization. It was published April 15 under application number PCT/US2020/055271. The earliest priority date was Oct. 11, 2019. The inventors of the pending patent are Matthew S. Brown, Dieter R. Enzmann, Koon-Pong Wong, Jonathan G. Goldin, Fereidoun Abtin, Morgan Daly and Liza Shrestha, all of the University of California, Los Angeles.

Parola Analytics provided technical research for this story.
