Summary: While much work in data science to date has focused on algorithmic scale and sophistication, safety — that is, safeguards against harm — is a domain no less worth pursuing. This is particularly true in applications like self-driving vehicles, where a machine learning system’s poor judgment might contribute to an accident.
That’s why firms like Intel’s Mobileye and Nvidia have proposed frameworks to guarantee safe and logical decision-making, and it’s why OpenAI — the San Francisco-based research firm cofounded by CTO Greg Brockman, chief scientist Ilya Sutskever, and others — today released Safety Gym. OpenAI describes it as a suite of tools for developing AI that respects safety constraints while training, and for comparing the “safety” of algorithms and the extent to which those algorithms avoid mistakes while learning.
Safety Gym is designed for reinforcement learning agents: AI systems that are progressively spurred toward goals via rewards (or punishments). These agents learn by trial and error, which can be a risky endeavor, because they sometimes try dangerous behaviors that lead to mistakes while learning.
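To make the idea concrete, here is a toy sketch of constrained trial-and-error learning. This is a hypothetical illustration, not the Safety Gym API: the action names, rewards, and the crude cost-penalty rule are all invented for the example. The key ingredient it shares with Safety Gym's framing is that every action carries both a reward and a separate safety "cost," and the learner is asked to keep its average cost under a budget while it explores.

```python
import random

# Hypothetical actions: each yields a reward, and riskier actions also
# incur a safety "cost" that the agent must keep under a budget.
REWARD = {"safe_slow": 1.0, "fast": 3.0, "reckless": 5.0}
COST = {"safe_slow": 0.0, "fast": 0.5, "reckless": 2.0}

def run(episodes=1000, cost_budget=0.6, seed=0):
    rng = random.Random(seed)
    totals = {a: 0.0 for a in REWARD}   # cumulative reward per action
    counts = {a: 0 for a in REWARD}     # times each action was tried
    cum_cost = 0.0
    for t in range(1, episodes + 1):
        # Whenever the running average cost exceeds the budget, penalize
        # each action's estimated value by a fixed multiple of its cost --
        # a crude stand-in for a Lagrangian-style constraint.
        over_budget = (cum_cost / t) > cost_budget
        def value(a):
            est = totals[a] / counts[a] if counts[a] else float("inf")
            return est - (10.0 * COST[a] if over_budget else 0.0)
        # Epsilon-greedy trial and error: explore 10% of the time.
        if rng.random() < 0.1:
            a = rng.choice(list(REWARD))
        else:
            a = max(REWARD, key=value)
        totals[a] += REWARD[a]
        counts[a] += 1
        cum_cost += COST[a]
    return counts, cum_cost / episodes

counts, avg_cost = run()
print(counts, round(avg_cost, 2))
```

Without the cost penalty the agent would simply settle on the highest-reward (and most dangerous) action; with it, the agent oscillates around the cost budget, trading some reward for safety. Constrained RL algorithms of the kind Safety Gym benchmarks formalize this trade-off rather than hand-tuning a penalty multiplier.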
[…]
Full text: VentureBeat