Journal Article Archive

DIALOGUE AS AUTOCOMMUNICATION - ON INTERACTIONS WITH LARGE LANGUAGE MODELS (2024)
Issue: Volume 5, Issue 2 (2024)
Authors: Карташева Анна

In a dialogue with large language models (LLMs), the addresser and the addressee of the message coincide, so such a dialogue can be called autocommunication. A neural network can only answer a question that has been formulated, and the question is formulated by the one who asks it, i.e., a human being. Human activity in dialogue with neural networks prompts reflection on the nature of such dialogue. Composing prompts is one of the most creative parts of dialogue with neural networks, although it is worth noting that a neural network is often better at composing prompts than a human. Does this mean that humans need to develop their questioning skills? In LLM-based dialogue systems, the main value to the user is the ability to clarify and structure their own thoughts. The structuring of thoughts happens through questioning, through formulating and clarifying questions: asking the right question is practically answering it. Thus, through autocommunication, the human “I” itself is developed, transformed, and restructured. Dialogue with large language models acts as a discursive practice that allows people to formulate their own thoughts and transform their selves through autocommunication. For this kind of dialogue, a certain image of the audience is normative, determining the material that can be produced in response to a given question. This is because the data for model training is provided by people, even if they have never thought about it. Thus, a dialogic relationship develops between the generated text and the questioning audience, one that develops all participants in the communication.

DO LANGUAGE MODELS COMMUNICATE? COMMUNICATIVE INTENT AND REFERENCE FROM A DERRIDEAN PERSPECTIVE (2024)
Issue: Volume 5, Issue 2 (2024)
Authors: Леон Ребека Перес

This paper assesses the arguments of Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell in the influential article “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”, which disputed that language models (LMs) can communicate and understand. In particular, I discuss the argument that LMs cannot communicate because their linguistic productions lack communicative intent and are not grounded in the real world or a model of the real world, which the authors regard as conditions for the possibility of communication and understanding. I argue that the authors’ view of communication and understanding is too restrictive and cannot account for a vast range of instances of communication, not only human-to-human communication but also communication between humans and other entities. More concretely, I maintain that communicative intent is a possible but not a necessary condition for communication and understanding, as it is oftentimes absent or unreliable. Communication need not be grounded in the real world in the sense of referring to objects or states of affairs in the real world, because communication can very well be about hypothetical or unreal worlds and objects. Drawing on Derrida’s philosophy, I elaborate alternative concepts of communication, as the transmission of an operation of demotivation and an overwhelming of interpretations with differential forces, and of understanding, as the best guess or best interpretation. Based on these concepts, the paper argues that LMs could be said to communicate and understand.
