Hundreds of AI tools have been built to catch covid. None of them helped.

It also obscures the origin of some data sets, which means researchers can miss important features that skew the training of their models. Many unwittingly used a data set containing chest scans of children who did not have covid as their examples of what non-covid cases look like. As a result, the AIs learned to identify children, not covid.

Driggs’s group trained its own model on a data set that mixed scans taken while patients were lying down with scans taken while they were standing up. Because patients scanned while lying down were more likely to be seriously ill, the AI learned to predict serious covid risk from a person’s position.
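
To make that risk concrete, here is a minimal sketch, not drawn from Driggs’s work, of the kind of check that can catch such a confound: if a metadata field such as scan position predicts the outcome on its own, an image model trained on the same data may be learning that shortcut rather than signs of disease. The column names and data below are hypothetical.

```python
import pandas as pd
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical metadata: how each scan was taken and the severity outcome.
meta = pd.DataFrame({
    "position": ["supine", "standing", "supine", "standing"] * 50,
    "severe":   [1, 0, 1, 0] * 50,
})

X = pd.get_dummies(meta[["position"]])  # one-hot encode the metadata field
y = meta["severe"]

# Accuracy of always predicting the majority class vs. a model that only sees position.
baseline = cross_val_score(DummyClassifier(strategy="most_frequent"), X, y, cv=5).mean()
position_only = cross_val_score(LogisticRegression(), X, y, cv=5).mean()

# If the position-only model beats the baseline by a wide margin, the data set is
# confounded, and an image model's accuracy on it should not be taken at face value.
print(f"baseline: {baseline:.2f}  position-only: {position_only:.2f}")
```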

In other cases, some AIs were found to be picking up on the text font that certain hospitals used to label their scans. As a result, fonts from hospitals with more serious caseloads became predictors of covid risk.

Mistakes like these seem obvious in hindsight. They can also be fixed by adjusting the models, if researchers are aware of them. It is possible to acknowledge the shortcomings and release a model that is less accurate but less misleading. But many of the tools were developed either by AI researchers who lacked the medical expertise to spot flaws in the data or by medical researchers who lacked the mathematical skills to compensate for those flaws.

A more subtle problem Driggs points to is incorporation bias, or bias introduced at the point where a data set is labeled. For example, many medical scans were labeled according to whether the radiologists who created them said they showed covid. But that embeds, or incorporates, the biases of that particular doctor into the data set’s ground truth. It would be much better to label a scan with the result of a PCR test than with one doctor’s opinion, Driggs says. But in busy hospitals there isn’t always time for statistical niceties.
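
As an illustration of the alternative Driggs describes, here is a minimal sketch, with hypothetical column names and data, of deriving labels from PCR results rather than from a radiologist’s reading, and flagging the cases where the two disagree.

```python
import pandas as pd

# Hypothetical records: one radiologist's reading and the PCR result for each scan.
scans = pd.DataFrame({
    "scan_id":          ["s1", "s2", "s3"],
    "radiologist_says": ["covid", "not covid", "covid"],
    "pcr_result":       ["positive", "negative", "negative"],
})

# Take the ground-truth label from the PCR test, not from the reader's opinion.
scans["label"] = (scans["pcr_result"] == "positive").astype(int)

# Flag scans where the radiologist and the PCR disagree; these deserve review
# instead of being silently folded into the training set.
reader_label = (scans["radiologist_says"] == "covid").astype(int)
disagreements = scans[scans["label"] != reader_label]
print(disagreements[["scan_id", "radiologist_says", "pcr_result"]])
```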

This has not stopped some of these tools from being pushed into clinical practice. Wynants says it isn’t clear which ones are being used, or how. Hospitals will sometimes say they are using a tool only for research purposes, which makes it hard to assess how much doctors are relying on it. “There’s a lot of secrecy,” she says.

Wynants asked one company that was marketing deep-learning algorithms to share information about its approach, but she did not hear back. She later found several published models from researchers tied to the company, all of them with a high risk of bias. “We don’t actually know what the company implemented,” she says.

According to Wynants, some hospitals are even signing nondisclosure agreements with medical AI vendors. When she asked doctors what algorithms or software they were using, they sometimes told her they weren’t allowed to say.

How to fix it

What’s the fix? Better data would help, but in a time of crisis that is a big ask. It is more important to make the most of the data sets we already have. The simplest step, Driggs says, would be for AI teams to collaborate more with clinicians. Researchers should also share their models and disclose how they were trained, so that others can test them and build on them. “Those are two things we could do today,” he says. “And they would solve maybe 50% of the problems we identified.”
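
As one illustration of the kind of disclosure Driggs is calling for, here is a minimal sketch of a training report that could accompany a released model. The fields are assumptions, loosely modeled on the “model card” idea, not a format prescribed in the article.

```python
import json

# Hypothetical contents; the field names are assumptions, not a prescribed format.
model_card = {
    "model": "covid-ct-classifier-demo",
    "intended_use": "research only, not clinical triage",
    "training_data": {
        "sources": "list and describe every collection that was pooled",
        "label_definition": "PCR-confirmed infection, not a radiologist's opinion",
        "known_confounds": ["scan position", "hospital-specific fonts", "patient age"],
    },
    "evaluation": {
        "external_test_sets": "name the held-out hospitals",
        "metrics": ["sensitivity", "specificity", "AUROC"],
    },
}

# Publishing this alongside the model lets others test it and build on it.
with open("model_card.json", "w") as f:
    json.dump(model_card, f, indent=2)
```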

Getting hold of data would also be easier if formats were standardized, says Bilal Mateen, a doctor who leads clinical technology research at the Wellcome Trust, a global health research charity based in London.
