Humans and AI: Problem finders and problem solvers
By Ben Dickson - February 14, 2022
https://bdtechtalks.com/2022/02/14/ai-humans-problem-solvers-problem-finders/
Last week’s announcement of AlphaCode, DeepMind’s source code–generating deep learning system, created a lot of excitement—some of it unwarranted—surrounding advances in artificial intelligence.
As I’ve mentioned in my deep dive on AlphaCode, DeepMind’s researchers have done a great job in bringing together the right technology and practices to create a machine learning model that can find solutions to very complex problems.
However, the sometimes-bloated coverage of AlphaCode by the media highlights the endemic problems with framing the growing capabilities of artificial intelligence in the context of competitions meant for humans.
Measuring intelligence with tests
For decades, AI researchers and scientists have been searching for tests that can measure progress toward artificial general intelligence. And having envisioned AI in the image of the human mind, they have turned to benchmarks for human intelligence.
Being multidimensional and subjective, human intelligence can be difficult to measure. But in general, there are some tests and competitions that most people agree are indicative of good cognitive abilities.
Think of every competition as a function that maps a problem to a solution. You’re provided with a problem, whether it’s a chessboard, a go board, a programming challenge, or a science question. You must map it to a solution. The size of the solution space depends on the problem. For example, go has a much larger solution space than chess because it has a larger board and a bigger number of possible moves. On the other hand, programming challenges have an even vaster solution space: There are hundreds of possible instructions that can be combined in nearly endless ways.
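To make those size differences concrete, here is a back-of-the-envelope sketch. The branching factors and game lengths are rough, commonly cited estimates, and the "programming" figures are purely illustrative:

```python
# Rough sizes of the search spaces discussed above, expressed as powers of ten.
# Branching factors and game lengths are commonly cited estimates, not exact figures.
from math import log10

def log10_tree_size(branching_factor: int, depth: int) -> float:
    """Approximate number of distinct sequences, b^d, returned as log10."""
    return depth * log10(branching_factor)

# Chess: roughly 35 legal moves per position, games of roughly 80 plies.
print(f"chess        ~ 10^{log10_tree_size(35, 80):.0f} move sequences")
# Go: roughly 250 legal moves per position, games of roughly 150 plies.
print(f"go           ~ 10^{log10_tree_size(250, 150):.0f} move sequences")
# A toy 'program space': 200 possible instructions, programs of 50 lines.
print(f"toy programs ~ 10^{log10_tree_size(200, 50):.0f} candidates")
```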
But in each case, a problem is matched with a solution and the solution can be weighed against an expected outcome, whether it’s winning or losing a game, answering the right question, maximizing a reward, or passing the test cases of the programming challenge.
When it comes to us humans, these competitions really test the limits of our intelligence. Given the computational limits of the brain, we can't brute-force our way through the solution space. No chess or go player can evaluate thousands, let alone millions, of moves at each turn in a reasonable amount of time. Likewise, a programmer can't randomly check every possible set of instructions until one results in the solution to the problem.
We start with a reasonable intuition (abduction), match the problem to previously seen patterns (induction), and repeatedly apply a set of known rules (deduction) until we have refined our answer into an acceptable solution. We hone these skills through training and practice, and we become better at finding good solutions in these competitions.
In the process of mastering these competitions, we develop many general cognitive skills that can be applied to other problems, such as planning, strategizing, design patterns, theory of mind, synthesis, decomposition, and critical and abstract thinking. These skills come in handy in other real-world settings, such as business, education, scientific research, product design, and the military.
In more specialized fields, such as math or programming, tests take on more practical implications. For example, in coding competitions, the programmer must decompose a problem statement into smaller parts, then design an algorithm that solves each part and put it all back together. The problems often have interesting twists that require the participant to think in novel ways instead of using the first solution that comes to mind.
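As a toy illustration of that decompose-and-recombine pattern, here is how a contestant might structure a solution to a made-up problem, "report the length of the longest strictly increasing run in a list and whether its sum is prime", with each sub-task in its own small function (the problem and the code are illustrative, not taken from any real contest):

```python
# A made-up contest-style task, solved by decomposition:
# break the problem into sub-problems, solve each, then recombine.

def longest_increasing_run(values: list[int]) -> list[int]:
    """Sub-problem 1: find the longest strictly increasing contiguous run."""
    best, current = [], []
    for v in values:
        if current and v <= current[-1]:
            current = []          # the run is broken, start a new one
        current = current + [v]
        if len(current) > len(best):
            best = current
    return best

def is_prime(n: int) -> bool:
    """Sub-problem 2: a simple trial-division primality check."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def solve(values: list[int]) -> tuple[int, bool]:
    """Recombine the parts into the final answer."""
    run = longest_increasing_run(values)
    return len(run), is_prime(sum(run))

print(solve([3, 1, 2, 5, 7, 4]))  # (4, False): run [1, 2, 5, 7] sums to 15
```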
Interestingly, a lot of the challenges you’ll see in these competitions have very little to do with the types of code programmers write daily, such as pulling data from a database, calling an API, or setting up a web server.
But you can expect a person who ranks high in coding competitions to have many general skills that require years of study and practice. This is why many companies use coding challenges as an important tool to evaluate potential hires. Put differently, competitive coding is a good proxy for the effort that goes into becoming a good programmer.
Mapping problems to solutions
When competitions, games, and tests are applied to artificial intelligence, the computational limits of the brain no longer apply. And this creates the opportunity for shortcuts that the human mind can’t achieve.
Take chess and go, two board games that have received much attention from the AI community over the past decades. Chess was once called the drosophila of artificial intelligence. In 1997, Deep Blue defeated chess grandmaster Garry Kasparov. But Deep Blue did not have the general cognitive skills of its human opponent. Instead, it used the sheer computational power of IBM's supercomputer to evaluate millions of moves every second and choose the best one, a feat that is beyond the capacity of the human brain.
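Deep Blue's actual engine, a massively parallel search with hand-tuned evaluation running on custom hardware, is far more sophisticated than anything that fits here, but a toy game makes the "search every line to the end" idea concrete. The sketch below solves a simple Nim variant (take one to three stones per turn; whoever takes the last stone wins) by exhaustively exploring the game tree:

```python
# Exhaustive game-tree search on a toy game: players alternately take
# 1-3 stones, and whoever takes the last stone wins. This only illustrates
# brute-force lookahead; it is not Deep Blue's actual algorithm.
from functools import lru_cache

@lru_cache(maxsize=None)
def can_win(stones: int) -> bool:
    """True if the player to move can force a win with `stones` left."""
    # A position is winning if some legal move leaves the opponent losing.
    return any(not can_win(stones - take)
               for take in (1, 2, 3) if take <= stones)

def best_move(stones: int):
    """Return a winning move if one exists, otherwise None."""
    for take in (1, 2, 3):
        if take <= stones and not can_win(stones - take):
            return take
    return None

print(best_move(10))  # 2: leaves 8 stones, a losing position for the opponent
```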
At the time, scientists and futurists thought that the Chinese board game go would remain beyond the reach of AI systems for a good while because it had a much larger solution space and required computational power that would not become available for several decades. They were proven wrong in 2016 when AlphaGo defeated go grandmaster Lee Sedol.
But again, AlphaGo didn't play the game like its human opponent. It took advantage of advances in machine learning and computing hardware. It had been trained on a large dataset of previously played games, far more than any human can play in an entire lifetime. It used deep reinforcement learning and Monte Carlo Tree Search (MCTS), along with the computational power of Google's servers, to find optimal moves at each turn. It didn't do a brute-force survey of every possible move like Deep Blue, but it still evaluated millions of moves at every turn.
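The contrast with exhaustive lookahead is the interesting part: rather than exploring every line, the system ranks candidate moves with a learned evaluator. The sketch below is a deliberately crude simplification on an abstract board, with a stub standing in for the value network, and it leaves out the policy network and MCTS entirely:

```python
# Value-guided move selection, heavily simplified: score each candidate
# move with a (stub) learned value function and keep the best one.
import random

def value_estimate(position: tuple[int, ...]) -> float:
    """Stub for a learned value network: deterministic pseudo-random score."""
    return random.Random(hash(position)).random()

def legal_moves(position: tuple[int, ...]) -> list[int]:
    """Empty cells of an abstract board are the legal moves."""
    return [i for i, cell in enumerate(position) if cell == 0]

def apply_move(position: tuple[int, ...], move: int) -> tuple[int, ...]:
    return position[:move] + (1,) + position[move + 1:]

def pick_move(position: tuple[int, ...]) -> int:
    """Pick the move whose resulting position the evaluator likes best."""
    return max(legal_moves(position),
               key=lambda m: value_estimate(apply_move(position, m)))

print(pick_move((0, 1, 0, 0, 1)))  # whichever empty cell scores highest
```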
AlphaCode is an even more impressive feat. It uses transformers—a type of deep learning architecture that is especially good at processing sequential data—to map a natural language problem statement to thousands of possible solutions. It then uses filtering and clustering to choose the 10 most-promising solutions proposed by the model. Impressive as it is, however, AlphaCode’s solution-development process is very different from that of a human programmer.
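A minimal sketch of that filter-then-cluster idea (not DeepMind's implementation, and with candidate programs represented as plain Python callables rather than generated source code): discard candidates that fail the example tests given in the problem statement, group the survivors by the outputs they produce on extra inputs, and keep one representative from each of the largest groups.

```python
# Illustrative filter-and-cluster selection over candidate programs.
from collections import defaultdict

def select_submissions(candidates, example_tests, extra_inputs, k=10):
    """Filter on example tests, cluster by behaviour, return at most k picks."""
    # 1) Filtering: keep only candidates that pass every example test.
    survivors = [c for c in candidates
                 if all(c(x) == expected for x, expected in example_tests)]

    # 2) Clustering: group survivors by their outputs on extra inputs, on the
    #    assumption that behaviourally identical programs are near-duplicates.
    clusters = defaultdict(list)
    for c in survivors:
        clusters[tuple(c(x) for x in extra_inputs)].append(c)

    # 3) Take one representative from each of the k largest clusters.
    ranked = sorted(clusters.values(), key=len, reverse=True)
    return [group[0] for group in ranked[:k]]

# Toy usage: candidates that try to compute the maximum of a list.
candidates = [
    lambda xs: max(xs),         # correct
    lambda xs: sorted(xs)[-1],  # correct, phrased differently
    lambda xs: xs[0],           # wrong
]
example_tests = [([1, 5, 3], 5), ([2, 2], 2)]
extra_inputs = [[9, 1], [4, 7, 7]]
print(len(select_submissions(candidates, example_tests, extra_inputs)))  # 1
```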
Humans are problem finders, AIs are problem solvers
When thought of as the equivalent of human intelligence, advances in AI lead us to all kinds of wrong conclusions, such as robots taking over the world, deep neural networks becoming conscious, and AlphaCode being as good as an average human programmer.
But when viewed in the framework of searching solution spaces, they take on a different meaning. In each of the cases described above, even if the AI system produces outcomes that are similar to or better than those of humans, the process it uses is very different from human thinking. In fact, these achievements prove that when you reduce a competition to a well-defined search problem, then with the right algorithm, rules, data, and computation power, you can create an AI system that can find the right solution without going through any of the intermediary skills that humans acquire when they master the craft.
Some might dismiss this difference as long as the outcome is acceptable. But when it comes to solving real-world problems, those intermediary skills that are taken for granted and not measured in the tests are often more important than the test scores themselves.
What does this mean for the future of human intelligence? I like to think of AI—at least in its current form—as an extension instead of a replacement for human intelligence. Technologies such as AlphaCode cannot think about and design their own problems—one of the key elements of human creativity and innovation—but they are very good problem solvers. They create unique opportunities for very productive cooperation between humans and AI. Humans define the problems, set the rewards or expected outcomes, and the AI helps by finding potential solutions at superhuman speed.
There are several interesting examples of this symbiosis, including a recent project in which Google's researchers formulated a chip floor-planning task as a game and had a reinforcement learning model evaluate numerous potential solutions until it found an optimal arrangement. Another popular trend is the emergence of tools like AutoML, which automate aspects of developing machine learning models by searching for optimal configurations of architecture and hyperparameter values. AutoML is making it possible for people with little experience in data science and machine learning to develop ML models and apply them to their applications. Likewise, a tool like AlphaCode will give programmers the opportunity to think more deeply about specific problems, formulate them into well-defined statements and expected results, and have the AI system generate novel solutions that might suggest new directions for application development.
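For a concrete taste of the AutoML trend, here is a sketch that covers only the hyperparameter-search slice of what full AutoML tools do, using scikit-learn's randomized search on a synthetic dataset:

```python
# Automatically searching a hyperparameter space instead of tuning by hand.
# Full AutoML tools also search over architectures, preprocessing steps, and
# more; this covers only the hyperparameter slice.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

search_space = {
    "n_estimators": [50, 100, 200],
    "max_depth": [None, 5, 10, 20],
    "min_samples_leaf": [1, 2, 5],
}

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions=search_space,
    n_iter=15,   # candidate configurations to evaluate
    cv=3,        # 3-fold cross-validation per candidate
    random_state=0,
)
search.fit(X, y)
print("best configuration:", search.best_params_)
print("cross-validated accuracy: %.3f" % search.best_score_)
```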
Whether these incremental advances in deep learning will eventually lead to AGI remains to be seen. But what’s for sure is that the maturation of these technologies will gradually create a shift in task assignment, where humans become problem finders and AIs become problem solvers.