Deep learning, which has in recent years become the dominant technique for creating new AIs, uses enormous amounts of data and computing power to fuel complex, accurate models. These resources are more accessible for researchers at large companies and elite universities. A study from Western University suggests there has been a “de-democratization” in AI: the number of researchers able to contribute to cutting-edge developments is shrinking. This may contribute to some of the ethical challenges facing AI development, including privacy invasion, bias and the environmental impact of large models.

To combat these problems, researchers are trying to figure out how to do more with less. One recent advance is called “less than one”-shot learning (LO-shot learning). The principle behind LO-shot learning is that it should be possible for an AI to learn about objects in the world without being fed an example of each one. This has been a major hurdle for contemporary AI systems, which often require thousands of examples to learn to distinguish objects. Humans, on the other hand, are often able to abstract away from existing examples in order to recognize new, never-before-seen items.
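The core trick that makes this possible is the use of "soft" labels: instead of each training example belonging to exactly one class, it carries a probability distribution over several classes, so two carefully placed examples can encode three or more categories. The following is a minimal, hypothetical sketch of this idea (the prototype positions and label distributions are illustrative choices, not taken from the article):

```python
import numpy as np

# Two 1-D prototype points, each carrying a SOFT label: a probability
# distribution over THREE classes. With fewer examples than classes,
# a distance-weighted classifier can still name all three.
prototypes = np.array([0.0, 1.0])
soft_labels = np.array([
    [0.6, 0.4, 0.0],   # prototype at x=0: mostly class 0, some class 1
    [0.0, 0.4, 0.6],   # prototype at x=1: mostly class 2, some class 1
])

def classify(x):
    """Blend the prototypes' soft labels by inverse distance, then
    pick the class with the highest combined probability mass."""
    d = np.abs(prototypes - x)
    w = 1.0 / (d + 1e-9)        # closer prototypes get more weight
    scores = w @ soft_labels    # weighted sum of label distributions
    return int(np.argmax(scores))
```

Queries near either prototype take that prototype's dominant class, while queries in the middle are captured by the "class 1" probability mass that both prototypes share, so the model recognizes a class it was never shown a dedicated example of.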

Allowing AIs to learn with considerably less data is important for several reasons. First, by building in abstractions that capture the relationships between objects, this technique reduces the potential for bias. Currently, deep-learning systems fall prey to bias arising from irrelevant features in the data they are trained on. A well-known example of this problem is an AI that classifies dogs as wolves when shown images of dogs in a snowy environment, because most training images of wolves feature snowy backgrounds. Being able to zero in on the relevant aspects of an image would help prevent such mistakes. Reducing data needs thus makes these systems less liable to this sort of bias.

Next, the less extensive the data one needs, the less incentive there is to surveil people in order to build better algorithms. For example, soft distillation techniques have already influenced medical AI research, which trains its models on sensitive health information. In one recent paper, researchers applied soft distillation to diagnostic x-ray imagery, training on a small, privacy-preserving data set.

Finally, allowing AIs to learn from less plentiful data helps to democratize the field of artificial intelligence. With smaller AIs, academia can remain relevant and avoid the risk of professors being poached by industry. Not only does LO-shot learning lower the barriers to entry by reducing training costs and data requirements, but it also gives users more flexibility to create novel data sets and experiment with new approaches. By reducing the time spent on data and architecture engineering, researchers looking to leverage AI can spend more time on the practical problems they are aiming to solve.

The author’s attitude towards LO-shot learning is ________.

A. concerned
B. doubtful
C. neutral
D. appreciative

Answer: D
