Journal of Shanghai University (Social Sciences Edition) ›› 2023, Vol. 40 ›› Issue (6): 63-78.


The Mind, Implications, and Risks of GPT Language Models: Starting from ChatGPT


  • About the author: JIAN Shengyu (1981- ), male, from Nanning, Guangxi. Professor at the College of Fine Arts and Design, Yangzhou University; Ph.D. in philosophy. Research interests: aesthetics of artificial intelligence, contemporary aesthetic trends.
  • Funding:
    National Social Science Fund of China, general project (20BZX131)

ChatGPT: Mind, Implications, and Risks of GPT Language Models

  1. College of Fine Arts and Design, Yangzhou University
  • Received:2023-06-10 Online:2023-11-15 Published:2023-11-09

Abstract: Generative pre-trained transformer (GPT) language models, represented by ChatGPT, have demonstrated a preliminary ability to simulate the human mind, and may, through continuous optimization and technological iteration, evolve into a general-purpose technology indispensable to people's daily work and life. Once GPT language models begin to exhibit human-like minds and are put to industrial use, they will generate new productivity far beyond that of previous eras and may even reshape the basic structure of human society. However, while driving the development of productivity, GPT language models will also cause problems such as job displacement, giving rise to derivative risks from excessively rapid technological development. The integration of GPT language models and their derivative applications into social production and everyday life should be a gradual and steady process, so as to allow buffer time and avoid excessive shocks to society.


Abstract: GPT (Generative Pre-trained Transformer) language models, represented by ChatGPT, show a preliminary ability to simulate the human mind. Continuous optimization and technological iteration make it possible for them to become an indispensable part of people's daily work and life. Once GPT language models begin to exhibit a human-like mind and are put to industrial use, they will bring new productivity unlike anything before, and may even reshape the basic pattern of human society. However, GPT language models not only promote productivity but also incur problems such as job replacement, along with derivative risks resulting from excessively rapid technological development. The integration of GPT language models and their derivative applications into social production and social life should be a gradual and steady process, so as to create buffer time and avoid excessive impact on society.

Key words: GPT, language model, human-like mind, employment, risk

CLC Number: