Spring AI - Multimodality - Orbis Sensualium Pictus


Humans process knowledge simultaneously across multiple modes of data input. The way we learn and our experiences are all multimodal: we don't have just vision, just audio, or just text. These foundational principles of learning were articulated by the father of modern education, John Amos Comenius, in his work "Orbis Sensualium Pictus", dating back to 1658: "All things that are naturally connected ought to be taught in combination."

Contrary to those principles, in the past our approach to machine learning was often focused on specialised models tailored to process a single modality. For instance, we developed audio models for tasks like text-to-speech or speech-to-text, and computer vision models for tasks such as object detection and classification. However, a new wave of multimodal large language models is starting to emerge. Examples include OpenAI's GPT-4 Vision, Google's Vertex AI Gemini Pro Vision, Anthropic's Claude 3, and the open source offerings LLaVA and BakLLaVA. These models are able to accept multiple inputs, including text, images, audio, and video, and to generate text responses by integrating those inputs. Multimodal large language model (LLM) features enable the models to process and generate text in conjunction with other modalities such as images, audio, or video.

Spring AI - Multimodality

Multimodality refers to a model's ability to simultaneously understand and process information from various sources, including text, images, audio, and other data formats. The Spring AI Message API provides all the necessary abstractions to support multimodal LLMs. The Message's content field is used primarily for text input, while the optional media field allows adding one or more pieces of content of different modalities, such as images, audio, and video. The MimeType specifies the modality type. Depending on the LLM used, the Media data field can hold either the encoded raw media content or a URI pointing to the content.

Note: The media field is currently applicable only to user input messages, e.g. UserMessage.

Example

Let's, for example, take the following picture (multimodal.test.png) as input and ask the LLM to explain what it sees in it. For most multimodal LLMs, the Spring AI code would look something like this:

    byte[] imageData = new ClassPathResource("/multimodal.test.png").getContentAsByteArray();

    var userMessage = new UserMessage(
            "Explain what do you see in this picture?",               // text content
            List.of(new Media(MimeTypeUtils.IMAGE_PNG, imageData)));  // image content

    ChatResponse response = chatClient.call(new Prompt(List.of(userMessage)));

This produces a response like:

This is an image of a fruit bowl with a simple design. The bowl is made of metal with curved wire edges that create an open structure, allowing the fruit to be visible from all angles. Inside the bowl, there are two yellow bananas resting on top of what appears to be a red apple. The bananas are slightly overripe, as indicated by the brown spots on their peels. The bowl has a metal ring at the top, likely to serve as a handle for carrying. The bowl is placed on a flat surface with a neutral-colored background that provides a clear view of the fruit inside.

The latest (1.0.0-SNAPSHOT) version of Spring AI provides multimodal support for the following chat clients:

- OpenAI (GPT-4 Vision model)
- Ollama (LLaVA and BakLLaVA models)
- Vertex AI Gemini (gemini-pro-vision model)
- Anthropic Claude 3
- AWS Bedrock Anthropic Claude 3
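As noted above, depending on the model, the Media data field can also hold a URI to remote content instead of raw bytes. The following is a minimal sketch of that variation, run against one of the chat clients listed above. It assumes the same chatClient and imports as the earlier example; the image URL is hypothetical, and exactly which data types (URL, URI, or a Spring Resource) the Media constructor accepts may vary by Spring AI version, so treat it as illustrative rather than definitive:

    // Hypothetical image URL; whether a URI/URL is accepted as Media data
    // depends on the Spring AI version and the target model.
    var urlMessage = new UserMessage(
            "Explain what do you see in this picture?",                        // text content
            List.of(new Media(MimeTypeUtils.IMAGE_PNG,
                    URI.create("https://example.com/multimodal.test.png"))));  // image referenced by URI

    ChatResponse urlResponse = chatClient.call(new Prompt(List.of(urlMessage)));

    // The generated description is carried on the response's output message.
    System.out.println(urlResponse.getResult().getOutput().getContent());

Passing a reference rather than raw bytes avoids loading large media into application memory and lets providers that support remote content fetch it themselves.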
Next steps

Next, Spring AI will rework the Document API to add multimodality support similar to the Message API. Currently, the AWS Bedrock Titan EmbeddingClient supports image embeddings. Additional multimodal embedding services will need to be integrated to allow encoding, storing, and searching multimodal content in the vector stores.

Conclusion

Traditionally, machine learning focused on specialised models for singular modalities. However, with innovations like OpenAI's GPT-4 Vision and Google's Vertex AI Gemini, a new era has dawned. As we embrace this era of multimodal AI, the vision of interconnected learning envisioned by Comenius becomes a reality. Spring AI's Message API facilitates the integration of multimodal LLMs, enabling developers to create innovative solutions. By leveraging these models, applications can comprehend and respond to data in various forms, unlocking new possibilities for AI-driven experiences.

