Tags

mongodb

Related articles:

Exploring innovative applications that combine MongoDB with artificial intelligence: transforming the insurance and financial services industries, improving factory productivity, and driving business innovation with Google Cloud and MongoDB.

Percona Database Performance Blog

Benchmarking MongoDB Performance on Kubernetes

Cloud-native databases are becoming the norm, and containerized databases are a common trend (see the report from Dynatrace and Figure 1). Kubernetes—the de facto standard for platform engineers—and operators simplify database deployment and management. But what are the performance implications of running databases in Kubernetes? To answer this question, we compared the performance of Percona Server for […]

Cloud-native databases are becoming the norm, and containerized databases are a common trend. This article compares the performance of Percona Server for MongoDB on bare-metal servers with the Percona Operator for MongoDB on Kubernetes. The results show no performance penalty for running in Kubernetes: across multiple hardware configurations and a comprehensive set of tests, both environments consistently delivered comparable performance. This suggests that Kubernetes, with the help of the Percona Operator for MongoDB, can host cloud-native databases without compromising efficiency or speed.

Devart Blog

Devart Rolls Out Python Connectors for Microsoft Access, Snowflake, and MongoDB

We're thrilled to introduce our new offerings: Python connectors for Microsoft Access, Snowflake, and MongoDB. These products mark a significant leap forward in enhancing data connectivity and analysis within Python applications.

Devart has released Python connectors for Microsoft Access, Snowflake, and MongoDB, enhancing data connectivity and analysis in Python applications. The connectors offer cross-platform support, easy connections, flexible data formats, support for multiple file formats, flexible querying, data visualization, and improved productivity and performance. Users can download the connectors to begin their data exploration journey.

MongoDB

Building AI With MongoDB: Integrating Vector Search And Cohere to Build Frontier Enterprise Apps

Cohere is the leading enterprise AI platform, building large language models (LLMs) which help businesses unlock the potential of their data. Operating at the frontier of AI, Cohere’s models provide a more intuitive way for users to retrieve, summarize, and generate complex information. Cohere offers both text generation and embedding models to its customers. Enterprises running mission-critical AI workloads select Cohere because its models offer the best performance-cost tradeoff and can be deployed in production at scale. Cohere’s platform is cloud-agnostic. Their models are accessible through their own API as well as popular cloud managed services, and can be deployed on a virtual private cloud (VPC) or even on-prem to meet companies where their data is, offering the highest levels of flexibility and control.

Cohere’s leading Embed 3 and Rerank 3 models can be used with MongoDB Atlas Vector Search to convert MongoDB data to vectors and build a state-of-the-art semantic search system. Search results can also be passed to Cohere’s Command R family of models for retrieval augmented generation (RAG) with citations. Check out our AI resource page to learn more about building AI-powered apps with MongoDB.

A new approach to vector embeddings

It is in the realm of embedding where Cohere has made a host of recent advances. Described as “AI for language understanding,” Embed is Cohere’s leading text representation language model. Cohere offers both English and multilingual embedding models, and gives users the ability to specify the type of data they are computing an embedding for (e.g., search document, search query). The result is embeddings that improve the accuracy of search results for traditional enterprise search or retrieval-augmented generation.

One challenge developers faced using Embed was that documents had to be passed one by one to the model endpoint, limiting throughput when dealing with larger data sets. To address that challenge and improve developer experience, Cohere has recently announced its new Embed Jobs endpoint. Now entire data sets can be passed in one operation to the model, and embedded outputs can be more easily ingested back into your storage systems. Additionally, with only a few lines of code, Rerank 3 can be added at the final stage of search systems to improve accuracy. It also works across 100+ languages and offers uniquely high accuracy on complex data such as JSON, code, and tabular structures. This is particularly useful for developers who rely on legacy dense retrieval systems.

Demonstrating how developers can exploit this new endpoint, we have published the How to use Cohere embeddings and rerank modules with MongoDB Atlas tutorial. Readers will learn how to store, index, and search the embeddings from Cohere. They will also learn how to use the Cohere Rerank model to provide a powerful semantic boost to the quality of keyword and vector search results.

Figure 1: Illustrating the embedding generation and search workflow shown in the tutorial

Why MongoDB Atlas and Cohere?

MongoDB Atlas provides a proven OLTP database handling high read and write throughput backed by transactional guarantees. Pairing these capabilities with Cohere’s batch embeddings is massively valuable to developers building sophisticated gen AI apps. Developers can be confident that Atlas Vector Search will handle high-scale vector ingestion, making embeddings immediately available for accurate and reliable semantic search and RAG.

Increasing the speed of experimentation, developers and data scientists can configure separate vector search indexes side by side to compare the performance of different parameters used in the creation of vector embeddings. In addition to batch embeddings, Atlas Triggers can also be used to embed new or updated source content in real time, as illustrated in the Cohere workflow shown in Figure 2.

Figure 2: MongoDB Atlas Vector Search supports Cohere’s batch and real-time workflows. (Image courtesy of Cohere)

Supporting both batch and real-time embeddings from Cohere makes MongoDB Atlas well suited to highly dynamic gen AI-powered apps that need to be grounded in live, operational data. Developers can use MongoDB’s expressive query API to pre-filter query predicates against metadata, making it much faster to access and retrieve the more relevant vector embeddings. The unification and synchronization of source application data, metadata, and vector embeddings in a single platform, accessed by a single API, makes building gen AI apps faster, with lower cost and complexity. Those apps can be layered on top of the secure, resilient, and mature MongoDB Atlas developer data platform that is used today by over 45,000 customers spanning startups to enterprises and governments handling mission-critical workloads.

What's next?

To start your journey into gen AI and Atlas Vector Search, review our 10-minute Learning Byte. In the video, you’ll learn about use cases, benefits, and how to get started using Atlas Vector Search.

Cohere is a leading enterprise AI platform offering large language models (LLMs) as well as text generation and embedding models. Its models offer the best performance-cost tradeoff and can be deployed in production at scale. Cohere's platform is cloud-agnostic: its models are accessible through its API and popular cloud managed services, and can be deployed in a VPC or on-premises. Cohere's Embed 3 and Rerank 3 models can be used with MongoDB Atlas Vector Search to build state-of-the-art semantic search systems.
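
As a rough illustration of the embed, store, search, and rerank workflow described above, here is a minimal Python sketch. It assumes the cohere and pymongo packages, an Atlas cluster, and a pre-created Atlas Vector Search index named "vector_index" on an "embedding" field; the database, collection, index, and field names are placeholders, not from the article.

```python
import cohere
from pymongo import MongoClient

co = cohere.Client("COHERE_API_KEY")                  # placeholder credentials
coll = MongoClient("ATLAS_CONNECTION_STRING")["demo"]["docs"]

# 1. Embed source documents (input_type marks them as searchable documents)
#    and store each vector alongside its text.
docs = list(coll.find({}, {"text": 1}))
emb = co.embed(texts=[d["text"] for d in docs],
               model="embed-english-v3.0", input_type="search_document")
for doc, vec in zip(docs, emb.embeddings):
    coll.update_one({"_id": doc["_id"]}, {"$set": {"embedding": vec}})

# 2. Embed the query (input_type="search_query") and run Atlas Vector Search.
query = "how do embeddings improve enterprise search?"
qvec = co.embed(texts=[query], model="embed-english-v3.0",
                input_type="search_query").embeddings[0]
hits = list(coll.aggregate([
    {"$vectorSearch": {"index": "vector_index", "path": "embedding",
                       "queryVector": qvec, "numCandidates": 100, "limit": 10}},
    {"$project": {"text": 1, "_id": 0}},
]))

# 3. Rerank the candidates to boost the relevance of the final results.
reranked = co.rerank(query=query, documents=[h["text"] for h in hits],
                     model="rerank-english-v3.0", top_n=3)
for r in reranked.results:
    print(r.relevance_score, hits[r.index]["text"])
```

Passing a distinct input_type for documents and queries mirrors the article's point that Embed lets users specify the type of data being embedded.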

MongoDB

Collaborating to Build AI Apps: MongoDB and Partners at Google Cloud Next '24

From April 9 to April 11, Las Vegas became the center of the tech world, as Google Cloud Next '24 took over the Mandalay Bay Convention Center—and the convention’s spotlight shined brightest on gen AI. Check out our AI resource page to learn more about building AI-powered apps with MongoDB.

Between MongoDB’s big announcements with Google Cloud (which included an expanded collaboration to enhance building, scaling, and deploying gen AI applications using MongoDB Atlas Vector Search and Vertex AI), industry sessions, and customer meetings, we offered in-booth lightning talks with leaders from four MongoDB partners—LangChain, LlamaIndex, Patronus AI, and Unstructured—who shared valuable insights and best practices with developers who want to embed AI into their existing applications or build new-generation apps powered by AI.

Developing next-generation AI applications involves several challenges, including handling complex data sources, incorporating structured and unstructured data, and mitigating scalability and performance issues in processing and analyzing them. The lightning talks at Google Cloud Next ‘24 addressed some of these critical topics and presented practical solutions.

One of the most popular sessions was from Harrison Chase, co-founder and CEO at LangChain, an open-source framework for building applications based on large language models (LLMs). Harrison provided tips on fixing your retrieval-augmented generation (RAG) pipeline when it fails, addressing the most common pitfalls of fact retrieval, non-semantic components, conflicting information, and other failure modes. Harrison recommended developers use LangChain templates for MongoDB Atlas to deploy RAG applications quickly.

Meanwhile, LlamaIndex—an orchestration framework that integrates private and public data for building applications using LLMs—was represented by Simon Suo, co-founder and CTO, who discussed the complexities of advanced document RAG and the importance of using good data to perform better retrieval and parsing. He also highlighted MongoDB’s partnership with LlamaIndex, which allows for ingesting data into the MongoDB Atlas vector database and retrieving the index from MongoDB Atlas via LlamaParse and LlamaCloud.

Harrison Chase - LangChain | Simon Suo - LlamaIndex

Guillaume Nozière shed light on common mistakes made by RAG applications (such as hallucinations) and challenges related to catching those reliably at scale. His insights come from his work as a forward-deployed engineer at Patronus AI, an automated LLM evaluation platform that helps companies deploy gen AI applications confidently. While presenting, Guillaume also reiterated how critical it is for RAG systems to be built on top of reliable data platforms such as MongoDB Atlas.

Last but not least, Unstructured’s Partnerships Manager Andrew Zane discussed the common issues that the heterogeneity of file types and document layouts can create when getting data LLM-ready. He also noted how Unstructured, a platform that connects any type of enterprise data with LLMs, can solve these issues and even enhance retrieval performance when used alongside MongoDB Atlas Vector Search.

Guillaume Nozière - Patronus AI | Andrew Zane - Unstructured

Amidst so many booths, activities, and competing programming, a range of developers from across industries showed up to these insightful sessions, where they could engage with experts, ask questions, and network in a casual setting. They also learned how our AI partners and MongoDB work together to offer complementary solutions to create a seamless gen AI development experience. We are grateful for LangChain, LlamaIndex, Patronus AI, and Unstructured's ongoing partnership. We look forward to expanding our collaboration to help our joint customers build the next generation of AI applications.

To learn more about building AI-powered apps with MongoDB, check out our AI Resources Hub and stop by our Partner Ecosystem Catalog to read about our integrations with these and other AI partners.

At Google Cloud Next '24, partners including LangChain, LlamaIndex, Patronus AI, and Unstructured shared insights and best practices for embedding AI into existing applications and building AI-powered apps. They discussed the challenges of handling complex data sources and structured and unstructured data, as well as scalability and performance issues in processing and analyzing them. The solutions presented included using LangChain templates to quickly deploy RAG applications, using LlamaIndex to integrate data for building applications, using Patronus AI for reliable LLM evaluation, and using Unstructured to connect enterprise data with LLMs. These solutions complement MongoDB Atlas to give developers a seamless gen AI development experience.

阿里云云栖号

Cloud-Native Best Practices Series 5: Using Function Compute (FC) to Let Alibaba Cloud Kafka Message Content Drive MongoDB DML Operations

In big-data ETL scenarios, forwarding messages from Kafka to downstream services is very common. Beyond straightforward message forwarding, many scenarios also require inspecting the message body and deciding, based on its content, what operation the downstream service should perform.

This article walks through the common big-data ETL scenario of forwarding Kafka messages to downstream services, using the message content to decide which DML operation to perform on MongoDB. The solution is flexible and extensible and comes with a complete logging system. The article also covers the deployment architecture and the products involved, noting that only Alibaba Cloud Kafka is currently supported. The steps include setting up the environment, configuring MongoDB and Function Compute (FC), validating the scenario, and releasing the resources.
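
As a rough sketch of this pattern (not the article's actual implementation), a Function Compute handler in Python might route each Kafka message to a MongoDB DML operation based on its body. The message schema with an "op" field, the connection string, and the collection names are all assumptions for illustration.

```python
import json
from pymongo import MongoClient

# Created at module scope so the connection is reused across invocations.
client = MongoClient("MONGODB_CONNECTION_STRING")   # placeholder
coll = client["etl"]["orders"]

def handler(event, context):
    # Function Compute delivers the trigger payload as bytes; we assume a
    # JSON body such as {"op": "update", "filter": {...}, "doc": {...}}.
    msg = json.loads(event)
    op = msg.get("op")
    if op == "insert":
        coll.insert_one(msg["doc"])
    elif op == "update":
        coll.update_one(msg["filter"], {"$set": msg["doc"]}, upsert=True)
    elif op == "delete":
        coll.delete_one(msg["filter"])
    else:
        raise ValueError(f"unknown op: {op}")
    return "ok"
```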

MongoDB

Transforming Industries with MongoDB and AI: Healthcare

This is the sixth in a six-part series focusing on critical AI use cases across several industries. The series covers the manufacturing and motion, financial services, retail, telecommunications and media, insurance, and healthcare industries.

In healthcare, transforming data into actionable insights is vital for enhancing clinical outcomes and advancing patient care. From medical professionals improving care delivery to administrators optimizing workflows and researchers advancing knowledge, data is the lifeblood of the healthcare ecosystem. Today, AI emerges as a pivotal technology, with the potential to enhance decision-making, improve patient experiences, and streamline operations — and to do so more efficiently than traditional systems.

Patient experience and engagement

While they may not expect it based on past experiences, patients crave a seamless experience with healthcare providers. Ideally, patient data from healthcare services, including telehealth platforms, patient portals, wearable devices, and EHRs, can be shared – securely – across interoperable channels. Unfortunately, disparate data sources, burdensome and time-consuming administrative work for providers, and overly complex and bloated solution stacks at the health system level all stand in the way of that friction-free experience.

AI can synthesize vast amounts of data and provide actionable insights, leading to personalized and proactive patient care, automated administrative processes, and real-time health insights. AI technologies, such as machine learning algorithms, natural language processing, and chatbots, are being used to enhance and quantify interactions. Additionally, AI-powered systems can automatically schedule appointments, send notifications, and optimize clinic schedules, all reducing wait times for patients. AI-enabled chatbots and virtual health assistants provide 24/7 support, offering instant responses, medication reminders, and personalized health education. AI can even identify trends and predict health events, allowing for early intervention and a reduction in adverse outcomes.

MongoDB’s flexible data model can unify disparate data sources, providing a single view of the patient that integrates EHRs, wearable data, and patient-generated health data for personalized care and better patient outcomes. For wearables and medical devices, MongoDB is the ideal underlying data platform to house time series data, significantly cutting down on storage costs while enhancing performance. With Atlas for the Edge, synchronization with edge applications, including hospital-at-home setups, becomes seamless. On the patient care front, MongoDB can support AI-driven recommendations for personalized patient education and engagement based on the analysis of individual health records and engagement patterns, and Vector Search can power search functionalities within patient portals, allowing patients to easily find relevant information and resources, thereby improving the self-service experience.

Enhanced clinical decision making

Healthcare decision-making is critically dependent on the ability to aggregate, analyze, and act on an exponentially growing volume of data. From EHRs and imaging studies to genomic data and wearable device data, the challenge is not just the sheer volume but the diversity and complexity of data. Healthcare professionals need to synthesize information across various dimensions to make informed, real-time, accurate decisions. Interoperability issues, data silos, a lack of data quality, and the manual effort required to integrate and interpret this data all stand in the way of better decision-making processes.

The advent of AI technologies, particularly NLP and LLMs, offers transformative potential for healthcare decision-making by automating the extraction and analysis of data from disparate sources, including structured data in EHRs and unstructured text in medical literature or patient notes. By enabling the querying of databases using natural language, clinicians can access and integrate patient information more rapidly and accurately, enhancing diagnostic precision and personalizing treatment approaches. Moreover, AI can support real-time decision-making by analyzing streaming data from wearable devices, alerting healthcare providers to changes in patient conditions that require immediate attention.

MongoDB, with its flexible data model and powerful data development platform, is uniquely positioned to support the complex data needs of healthcare decision-making applications. It can seamlessly integrate diverse data types, from FHIR-formatted clinical data to unstructured text and real-time sensor data, in a single platform. By integrating MongoDB with large language models (LLMs), healthcare organizations can create intuitive, AI-enhanced interfaces for data retrieval and analysis. This integration not only reduces the cognitive load on clinicians but also enables them to access and interpret patient data more efficiently, focusing their efforts on patient care rather than navigating complex data systems.

MongoDB's scalability ensures that healthcare organizations can manage growing data volumes efficiently, supporting the implementation of AI-driven decision support systems. These systems analyze patient data in real time against extensive medical knowledge bases, providing clinicians with actionable insights and recommendations, thereby enhancing the quality and timeliness of care provided. MongoDB's Vector Search further enriches decision-making processes by enabling semantic search across vast datasets directly within the database. This integrated approach enables the application of pre-filters based on extensive metadata, enhancing the efficiency and relevance of search results without the need to synchronize with dedicated search engines or vector stores. Healthcare professionals can thus utilize previously undiscoverable insights, streamlining the identification of relevant information and patterns.

Clinical trials and precision medicine

The need for innovation and transformation isn’t just limited to the patient-provider-healthcare system experience. The challenges of conducting clinical trials and advancing precision medicine are significant: everything from identifying and enrolling suitable participants to data management practices is fraught with the potential for errors, compromising the accuracy and reliability of trial outcomes. Moreover, the traditional one-size-fits-all approach to treatment development fails to address the unique genetic makeup of individual patients, limiting the effectiveness of therapeutic interventions.

AI can make clinical trials faster and treatments more personalized. It's like having a super-smart assistant that can quickly find the right people for studies, keep track of all the data without making mistakes, and even predict which medicines will work best for different people. This means doctors can create safe, efficient treatments that fit you perfectly, just like a tailor-made suit. Plus, with AI's help, these custom treatments can be developed quicker and be more affordable, bringing us closer to a future where everyone gets the care they need, designed just for them. It's a big step towards making medicine not just about treating sickness but about creating health plans that are as unique as patients are.

MongoDB plays a pivotal role in modernizing clinical trials and advancing precision medicine by addressing complex data challenges. Its flexible data model excels in integrating diverse data types, from EHRs and genomic data to real-time patient monitoring streams. This capability is crucial for clinical trials and precision medicine, where combining various data sources is necessary, sometimes through a project-purpose ODL, to develop a comprehensive understanding of patient health and treatment responses. For clinical trials, MongoDB can streamline participant selection by efficiently managing and querying vast datasets to identify candidates who meet specific criteria, significantly reducing recruitment time. Its ability to handle large-scale, complex datasets in real time also facilitates the dynamic monitoring of trial participants, enhancing the safety and accuracy of trials.

Other notable use cases

Patient flow optimization and emergency department efficiency: AI algorithms can process historical and real-time data to forecast patient volumes, predict bed availability, and identify optimal staffing levels, enabling proactive resource allocation and patient routing.

Virtual health assistants for chronic disease management: Utilizing AI-powered virtual assistants to monitor patients' health status, provide personalized advice, and support medication adherence for chronic conditions such as diabetes and hypertension.

AI-enhanced digital pathology and medical imaging: Build modern VNA (Vendor Neutral Archive) and digital pathology solutions with innovative approaches, deal with interoperable data, and manage the extensive metadata associated with all your resources, enabling fast findings and automated annotations.

Operational efficiency in hospital resource management: Implementing AI to optimize hospital operations, from staff scheduling to inventory management, ensuring resources are used efficiently and patient care is prioritized.

Learn more about AI use cases for top industries in our new ebook, How Leading Industries are Transforming with AI and MongoDB Atlas.

AI has the potential to transform healthcare by enhancing decision-making, improving patient experiences, and streamlining operations. It can synthesize data to deliver personalized care, automate administrative processes, and provide real-time health insights. MongoDB's flexible data model can unify disparate data sources for better patient outcomes. Natural language processing and language models can automate data extraction and analysis, enabling faster and more accurate decisions, while MongoDB's scalability and Vector Search further strengthen decision-making processes. AI can also speed up clinical trials and personalize treatments, advancing precision medicine, with MongoDB playing a key role in modernizing clinical trials by addressing complex data challenges. Other notable use cases include patient flow optimization, virtual health assistants, and operational efficiency in hospital resource management.
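
To make the wearables point above concrete, here is a minimal sketch, assuming pymongo and illustrative database, collection, and field names, of storing device readings in a MongoDB time series collection:

```python
from datetime import datetime, timezone
from pymongo import MongoClient

db = MongoClient("ATLAS_CONNECTION_STRING")["healthcare"]  # placeholder

# Time series collections bucket measurements internally, cutting storage
# costs and speeding up time-ordered queries.
db.create_collection("vitals", timeseries={
    "timeField": "ts",       # required: when the measurement was taken
    "metaField": "device",   # optional: per-series metadata (device/patient)
    "granularity": "seconds",
})

db.vitals.insert_one({
    "ts": datetime.now(timezone.utc),
    "device": {"deviceId": "wearable-42", "patientId": "p-123"},
    "heartRate": 72,
    "spo2": 98,
})
```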

六虎

Does MongoDB support transactions?

I. Concepts. 1.1 Introduction to MongoDB transactions. MongoDB is a non-relational database management system that originally did not support transactions. Over time, however, MongoDB introduced multi-document transaction support in version 4.0, making it possible, within a single collection, to execute […]

MongoDB 4.0 introduced support for multi-document transactions, allowing multiple operations to execute in a single transaction and eliminating the need for distributed transactions in many practical scenarios. For cases that require atomic reads and writes across multiple documents, MongoDB supports distributed transactions, which require a replica set deployment. The article provides examples of multi-document transactions and commit/rollback operations.
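
The commit/rollback behavior can be illustrated with a minimal PyMongo sketch (not the article's own code). It assumes a replica set is available, since multi-document transactions require one, and the account documents are placeholders:

```python
from pymongo import MongoClient

# Multi-document transactions require a replica set (here a local one, rs0).
client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
accounts = client["bank"]["accounts"]

def transfer(src, dst, amount):
    with client.start_session() as session:
        # Exiting the block normally commits; an exception raised inside it
        # aborts (rolls back) the whole transaction.
        with session.start_transaction():
            accounts.update_one({"_id": src},
                                {"$inc": {"balance": -amount}}, session=session)
            accounts.update_one({"_id": dst},
                                {"$inc": {"balance": amount}}, session=session)

accounts.insert_many([{"_id": "alice", "balance": 500},
                      {"_id": "bob", "balance": 100}])
transfer("alice", "bob", 100)  # both updates apply atomically, or neither does
```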

MongoDB

Retrieval Augmented Generation for Claim Processing: Combining MongoDB Atlas Vector Search and Large Language Models

Following up on our previous blog, AI, Vectors, and the Future of Claims Processing: Why Insurance Needs to Understand The Power of Vector Databases, we’ll pick up the conversation right where we left it. We discussed extensively how Atlas Vector Search can benefit the claim process in insurance and briefly covered Retrieval Augmented Generation (RAG) and Large Language Models (LLMs).

One of the biggest challenges for claim adjusters is pulling and aggregating information from disparate systems and diverse data formats. PDFs of policy guidelines might be stored in a content-sharing platform, customer information locked in a legacy CRM, and claim-related pictures and voice reports in yet another tool. All of this data is not just fragmented across siloed sources and hard to find but also in formats that have been historically nearly impossible to index with traditional methods. Over the years, insurance companies have accumulated terabytes of unstructured data in their data stores but have failed to capitalize on the possibility of accessing and leveraging it to uncover business insights, deliver better customer experiences, and streamline operations. Some of our customers even admit they’re not fully aware of all the data in their archives. There’s a tremendous opportunity to leverage this unstructured data to benefit the insurer and its customers.

Our image search post covered part of the solution to these challenges, opening the door to working more easily with unstructured data. RAG takes it a step further, integrating Atlas Vector Search and LLMs, thus allowing insurers to go beyond the limitations of baseline foundational models, making them context-aware by feeding them proprietary data. Figure 1 shows how the interaction works in practice: through a chat prompt, we can ask questions to the system, and the LLM returns answers to the user and shows what references it used to retrieve the information contained in the response. Great! We’ve got a nice UI, but how can we build a RAG application? Let’s open the hood and see what’s in it!

Figure 1: UI of the claim adjuster RAG-powered chatbot

Architecture and flow

Before we start building our application, we need to ensure that our data is easily accessible and in one secure place. Operational Data Layers (ODLs) are the recommended pattern for wrangling data to create single views. This post walks the reader through the process of modernizing insurance data models with Relational Migrator, helping insurers migrate off legacy systems to create ODLs.

Once the data is organized in our MongoDB collections and ready to be consumed, we can start architecting our solution. Building upon the schema developed in the image search post, we augment our documents by adding a few fields that will allow adjusters to ask more complex questions about the data and solve harder business challenges, such as resolving a claim in a fraction of the time with increased accuracy. Figure 2 shows the resulting document with two highlighted fields, “claimDescription” and its vector representation, “claimDescriptionEmbedding”. We can now create a Vector Search index on this array, a key step to facilitate retrieving the information fed to the LLM.

Figure 2: Document schema of the claim collection; the highlighted fields are used to retrieve the data that will be passed as context to the LLM

Having prepared our data, building the RAG interaction is straightforward; refer to this GitHub repository for the implementation details. Here, we’ll just discuss the high-level architecture and the data flow, as shown in Figure 3 below:

1. The user enters the prompt, a question in natural language.
2. The prompt is vectorized and sent to Atlas Vector Search; similar documents are retrieved.
3. The prompt and the retrieved documents are passed to the LLM as context.
4. The LLM produces an answer to the user (in natural language), considering the context and the prompt.

Figure 3: RAG architecture and interaction flow

It is important to note how the semantics of the question are preserved throughout the different steps. The reference to “adverse weather”-related accidents in the prompt is captured and passed to Atlas Vector Search, which surfaces claim documents whose claim description relates to similar concepts (e.g., rain) without needing to mention them explicitly. Finally, the LLM consumes the relevant documents to produce a context-aware answer referencing rain, hail, and fire, as we’d expect based on the user's initial question.

So what?

To sum it all up, what’s the benefit of combining Atlas Vector Search and LLMs in a claim processing RAG application?

Speed and accuracy: With the data centrally organized and ready to be consumed by LLMs, adjusters can find all the necessary information in a fraction of the time.

Flexibility: LLMs can answer a wide spectrum of questions, meaning applications require less upfront system design. There is no need to build custom APIs for each piece of information you’re trying to retrieve; just ask the LLM to do it for you.

Natural interaction: Applications can be interrogated in plain English without programming skills or system training.

Data accessibility: Insurers can finally leverage and explore unstructured data that was previously hard to access.

Not just claim processing

The same data model and architecture can serve additional personas and use cases within the organization:

Customer service: Operators can quickly pull customer data and answer complex questions without navigating different systems. For example, “Summarize this customer's past interactions,” “What coverages does this customer have?” or “What coverages can I recommend to this customer?”

Customer self-service: Simplify your members’ experience by enabling them to ask questions themselves. For example, “My apartment is flooded. Am I covered?” or “How long do windshield repairs take on average?”

Underwriting: Underwriters can quickly aggregate and summarize information, providing quotes in a fraction of the time. For example, “Summarize this customer's claim history,” or “I am renewing a customer policy. What are the customer's current coverages? Pull everything related to the policy entity/customer. I need to get baseline info. Find relevant underwriting guidelines.”

If you would like to discover more about Converged AI and Application Data Stores with MongoDB, take a look at the following resources:

RAG for claim processing GitHub repository
From Relational Databases to AI: An Insurance Data Modernization Journey
Modernize your insurance data models with MongoDB and Relational Migrator

This article discusses combining Atlas Vector Search and large language models (LLMs) to build a claim processing RAG application that speeds up claim handling and improves its accuracy. With the data centrally organized and ready for consumption by LLMs, adjusters can find all the necessary information in a fraction of the time. LLMs can answer a wide spectrum of questions without upfront system design, and the application can be queried in plain English without programming skills or system training. The same data model and architecture can also serve other personas and use cases within the organization.
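
The four-step flow can be sketched in a few lines of Python; this is not the repository's actual code. For illustration it uses OpenAI's SDK for the embedding and chat models (the post does not prescribe a provider), the index name "claim_index" is an assumption, and the field names mirror Figure 2:

```python
from openai import OpenAI
from pymongo import MongoClient

ai = OpenAI()  # assumes OPENAI_API_KEY in the environment
coll = MongoClient("ATLAS_CONNECTION_STRING")["insurance"]["claims"]

def answer(prompt: str) -> str:
    # Steps 1-2: vectorize the prompt and retrieve similar claim documents.
    qvec = ai.embeddings.create(model="text-embedding-3-small",
                                input=prompt).data[0].embedding
    docs = list(coll.aggregate([
        {"$vectorSearch": {"index": "claim_index",
                           "path": "claimDescriptionEmbedding",
                           "queryVector": qvec,
                           "numCandidates": 150, "limit": 5}},
        {"$project": {"claimDescription": 1, "_id": 0}},
    ]))
    # Step 3: pass the prompt and the retrieved documents to the LLM as context.
    context = "\n".join(d["claimDescription"] for d in docs)
    # Step 4: the LLM answers in natural language, grounded in that context.
    chat = ai.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system",
                   "content": f"Answer using this context:\n{context}"},
                  {"role": "user", "content": prompt}])
    return chat.choices[0].message.content

print(answer("Show me claims related to adverse weather conditions."))
```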

MongoDB

VertexAI and MongoDB for Intelligent Retail Pricing

In today’s competitive retail environment, the ability to quickly adjust pricing in response to market trends, consumer demand, and competitors’ moves is not just an advantage — it's essential for survival. This is where dynamic pricing comes into play, serving as a strategic lever for businesses to pull in their quest for market dominance. Dynamic pricing goes beyond changing numbers; it’s a strategic approach that reflects the dynamic nature of the market, powered by data-driven insights that enable prices to be adjusted in real time for maximum effectiveness.

This shift towards a more agile, data-driven pricing strategy underscores a broader trend in the business world: the recognition of data as a foundational element in decision-making processes. By leveraging real-time data, businesses can ensure their pricing strategies are not only responsive to market fluctuations but also strategically aligned with their overall business objectives, thus driving retail competitiveness to new heights. Let’s uncover how integrating both platforms empowers developers when it comes to delivering best-in-class, data-driven applications.

Google Cloud: A platform for real-time analytics and AI

Google Cloud stands out as a powerhouse in real-time analytics and artificial intelligence (AI), offering the infrastructure necessary for dynamic pricing strategies and other data-driven business approaches. It's designed to facilitate big data analysis, machine learning, and operational agility. Built-in tools form the backbone of an effective dynamic pricing strategy. These include Vertex AI for advanced machine learning models following best-in-class MLOps practices, and Pub/Sub for real-time messaging to solve real-time data ingestion. By harnessing the power of Google Cloud, retailers can analyze vast quantities of data in real time, from current market trends to customer behavior and competitor pricing. This enables businesses to make informed decisions swiftly, adjusting their pricing strategies to reflect the ever-changing market conditions.

MongoDB: Flexible data modeling and rapid application development

MongoDB complements Google Cloud by offering a high-performance document-based database with a flexible data model that allows rapid application development. For pricing data in particular, where there may be different variants for different sizes of store or country, this flexibility eases the storage of complex or hierarchical data. In addition, polymorphic capabilities allow you to use a single interface to represent different types, making your system more flexible. It also supports scalability, as new types can be easily integrated. Lastly, it enhances efficiency by allowing the same operation to behave differently based on the object, reducing code redundancy. This flexible schema also enables seamless integration with AI models. MongoDB Atlas supports workload isolation, ensuring dedicated resources for AI tasks and smooth operation alongside core application workloads. Additionally, change streams and triggers can be utilized to capture real-time updates in the pricing data, allowing the AI model to be called upon for immediate analysis and adaptation and enabling in-app analytics for retailers to gain a competitive edge.

Figure 1: MongoDB replica set: workload isolation

In the dynamic pricing reference architecture, Atlas collections function as an ML feature store. By leveraging the capabilities of MongoDB Atlas as a developer data platform, we are able to embed real-time automated decision-making into our e-commerce applications and reduce operational overhead for both business operations and MLOps model fine-tuning. This is achieved through a streamlined approach to data management, incorporating real-time, automated decision-making, workload isolation, change streams, triggers for immediate updates, and seamless integration with AI models.

Dynamic pricing microservice overview

Building an event-driven AI architecture leveraging MongoDB Atlas in Google Cloud is straightforward. We can summarize our dynamic pricing microservice by first describing the different components of its architecture, what they are used for, and how they interact with each other:

Figure 2: Description of the different technology components of a dynamic pricing microservice and what they are used for

Handling data sources

The proposed solution uses Google Cloud Pub/Sub to ingest data sources like customer behavior events in JSON format. Using a technology like Pub/Sub allows for scaling to handle a large number of messages and efficiently distributing them to many subscribers. This is partly because it allows for parallel processing of messages and can be distributed across multiple servers or instances. It is often a fundamental pattern in event-driven architectures, where the flow of the program is determined by events or messages, supporting reactive programming and making the system more responsive and efficient.

Data federation

We’ll use Vertex AI Notebooks to clean the data and train a TensorFlow model. This model will learn the non-linear relation between customer events, product names, and prices, enabling it to calculate the optimal predicted price.

Orchestrating

Using Cloud Functions, we orchestrate the customer events coming from the Pub/Sub topic to be converted into tensors, which are then stored in a MongoDB Atlas collection. This collection acts as a feature store: a centralized repository designed to store, manage, and serve features for machine learning (ML) models. Features represent individual measurable properties or characteristics used by ML models to make predictions or decisions. MongoDB’s document model flexibility, paired with the document versioning pattern, allows us to design time-sensitive chunks of events and granularly manage the training datasets for our models.

Serving

The Cloud Function will use the event tensor to invoke our trained model, which is served on a Vertex AI endpoint. The model will provide a predicted price score that can then be inserted into our product catalog stored in MongoDB, so our e-commerce application can read the price change in real time.

Dynamic pricing architecture: Putting it all together

In the following architecture diagram, the blue data flow illustrates how customer event data is ingested into a Pub/Sub topic. This allows us to make a push subscription to a Cloud Function from the topic. This function orchestrates the data transformation from raw event into a tensor and calls an endpoint to then update the predicted price in our MongoDB product catalog collection. By using this architectural approach, we can isolate raw event threads and build different services around them, reacting in real time for dynamic pricing or asynchronously for model training. With every component loosely coupled, we prevent the system from crashing completely. Moreover, publishers and subscribers can continue to process their logic without the need for the other components to receive or publish messages.

Figure 3: Dynamic pricing architecture integrating different Google Cloud components and MongoDB Atlas as a feature store

For businesses, this translates into more precise and responsive pricing strategies. In the model building and optimization phase, by utilizing TensorFlow within Google Cloud Vertex AI notebooks, retailers can harness the power of deep learning capabilities. The neural network model is capable of analyzing intricate patterns and relationships within large datasets. This is how businesses may capture nuanced market dynamics, customer behavior, and pricing elasticity with greater accuracy, leading to more optimized pricing decisions.

But even the best of models should be consistently optimized. Maintaining model effectiveness requires continuous adaptation. Regularly evaluating accuracy and performing feature engineering ensures your models stay sensitive to market changes. This underscores the importance of retraining as a core principle in a continuous-improvement data science approach. Using MongoDB Atlas as your operational data layer means that your feature store is always accessible, reducing downtime and improving the efficiency of machine learning operations. On the other hand, cross-region deployments can bring features closer to where machine learning models are being trained or served, reducing latency and improving model performance.

Get started

The integration of Google Cloud and MongoDB presents an easy approach to modernizing dynamic pricing strategies. Leveraging real-time analytics, flexible data modeling, and a reactive microservices architecture, it empowers businesses to achieve operational efficiencies and gain a competitive advantage in their pricing strategies. For retailers looking to elevate their pricing strategies, considering a strategic partnership with both technologies is essential. For a deeper dive into integrating the different components of this architecture, make sure to check our GitHub repository. Check out our AI resource page to learn more about building AI-powered apps with MongoDB.

Dynamic pricing is a strategic tool for quickly adjusting prices in response to market trends, consumer demand, and competitors' moves. Combining Google Cloud and MongoDB provides a powerful foundation for real-time analytics and AI, helping retailers analyze market trends, customer behavior, and competitor pricing to improve operational efficiency and gain a competitive edge.
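
A rough sketch of the orchestrating and serving steps described above, not the reference implementation: a Pub/Sub-triggered Cloud Function converts a customer event into a feature vector, stores it in the Atlas feature store, and calls a Vertex AI endpoint for a predicted price. The endpoint path, collection names, and event fields are illustrative.

```python
import base64
import json

from google.cloud import aiplatform
from pymongo import MongoClient

db = MongoClient("ATLAS_CONNECTION_STRING")["retail"]      # placeholder
endpoint = aiplatform.Endpoint(
    "projects/PROJECT/locations/us-central1/endpoints/ENDPOINT_ID")

def handle_event(event, context):
    # Pub/Sub-triggered Cloud Function (1st-gen signature); the message
    # body arrives base64-encoded in event["data"].
    payload = json.loads(base64.b64decode(event["data"]))
    features = [payload["views"], payload["addsToCart"], payload["purchases"]]

    # Feature store: persist the tensor for training and auditing.
    db.features.insert_one({"productId": payload["productId"],
                            "tensor": features})

    # Serving: ask the trained model for a predicted price...
    predicted = endpoint.predict(instances=[features]).predictions[0]

    # ...and write it back to the product catalog for the storefront to read.
    db.products.update_one({"_id": payload["productId"]},
                           {"$set": {"price": predicted}})
```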

MongoDB

The Journey of MongoDB with COVESA in the Connected Vehicle Landscape

There’s a popular saying: “If you want to go fast, go alone; if you want to go far, go together.” I would argue The Connected Vehicle Systems Alliance (COVESA), in partnership with their extensive member network, turns this saying on its head. They have found a way to go fast, together, and also go far, together.

COVESA is an industry alliance focused on enabling the widespread adoption of connected vehicle systems. This group aims to accelerate the development of these technologies through collaboration and standardization. It's made up of various stakeholders in the automotive and technology sectors, including car manufacturers, suppliers, and tech companies. COVESA’s collaborative approach allows members to accelerate progress. Shared solutions eliminate the need for individual members to reinvent the wheel. This frees up their resources to tackle new challenges, as the community collectively builds, tests, and refines foundational components.

As vehicles become more connected, the data they generate explodes in volume, variety, and velocity. Cars are no longer just a mode of transportation, but a platform for advanced technology and data-driven services. This is where MongoDB steps in.

MongoDB and COVESA

As the database trusted for mission-critical systems by enterprises such as Cathay Pacific, Volvo Connect, or Cox Automotive, MongoDB has gained expertise in automotive, along with many other industries, building cross-industry knowledge in handling large-scale, diverse data sets. This in turn enables us to contribute significantly to vehicle applications and provide a unique view, especially in the data architecture discussions within COVESA. MongoDB solutions support these kinds of innovations, enabling automotive companies to leverage data for advanced features. One of the main features we provide is Atlas Device SDKs: a low-footprint, embedded database living directly on ECUs. It can synchronize data automatically with the cloud using Atlas Device Sync, our data transfer protocol that compresses the data, handles conflict resolution, and only syncs delta changes, making it extremely efficient in terms of operations and maintenance.

VSS: The backbone of connected vehicle data

An important area of COVESA’s work is the Vehicle Signal Specification (VSS). VSS is a standardized framework used to describe the data of a vehicle, such as speed, location, and diagnostic information. This standardization is essential for interoperability between different systems and components within a vehicle, as well as for external communication with other vehicles and infrastructure. VSS has been gaining more and more adoption, and it’s backed by ongoing contributions from BMW, Volvo Cars, Jaguar LR, Robert Bosch, and Geotab, among others.

MongoDB’s BSON and our object-oriented Device SDKs uniquely position us to contribute to VSS implementation. The VSS data structure maps one-to-one to documents in MongoDB and objects in Atlas Device SDKs, which simplifies development and speeds up applications by completely skipping any relational mapper layer. For every read or write, there is no need to transform the data between relational and VSS. Our insights into data structuring, querying, and management can help optimize the way data is stored and accessed in connected vehicles, making it more efficient and robust.

Where MongoDB contributes

MongoDB, within COVESA, finds its most meaningful contributions in areas where data complexities and community collaboration intersect. First, we can share insights into managing the vast and varied data emerging from connected vehicles, which generate data on everything from engine performance to driver behavior. Second, we have an important role in supporting the standardization efforts, crucial for ensuring different systems within vehicles can communicate seamlessly. Our inputs can help ensure these standards are robust and practical, considering the real-world scenarios of data usage in vehicles. Some of our contributions include an over-the-air update architectural review presented at COVESA’s AMM in Troy in October 2023; sharing insights about the Data Middleware PoC with BMW; and weekly contributions at the Data Expert Group. You can find some of our contributions on COVESA’s Wiki page.

In essence, MongoDB's role in COVESA is about providing a unique perspective from the database management point of view, offering our understanding from different industries and use cases to support the developments towards more connected and intelligent vehicles.

MongoDB, COVESA, and AWS together at CES 2024

MongoDB’s most recent collaboration with COVESA was at the Consumer Electronics Show CES 2024, during which MongoDB’s connected vehicle solution was showcased. This solution leverages Atlas Device SDKs, such as the SDK for C++, which enables local data storage, in-vehicle data synchronization, and also uni- and bi-directional data transfer with the cloud. Below is a schematic illustrating the integration of MongoDB within the software-defined vehicle:

Schema 1: End-to-end integration for the connected vehicle

At CES 2024, MongoDB also teamed up with AWS for a compelling presentation, "AI-powered Connected Vehicles with MongoDB and AWS," led by Dr. Humza Akhtar and Mohan Yellapantula, Head of Automotive Solutions & Go To Market at AWS. The session delved into the intricacies of building connected vehicle user experiences using MongoDB Atlas. It showcased the combined strengths of MongoDB's expertise and AWS's generative AI tools, emphasizing how Atlas Vector Search unlocks the full lifecycle value of connected vehicle data. During the event, MongoDB also engaged in a conversation with The Six Five, exploring various aspects of mobility, software-defined vehicles (SDVs), and the MongoDB and AWS partnership. This discussion extended to merging IT and OT, gen AI, Atlas Edge Server, and the Atlas Device SDK.

Going forward

At the end of the road, it’s all about enhancing the end-user experience and providing unique value propositions. Defect diagnosis based on the acoustics of the engine, improved crash assistance with mobile and vehicle telemetry data, just-in-time food ordering while on the road, in-vehicle payments, and much, much more. What all these experiences have in common is the combination of interconnected data from different systems. At MongoDB, we are laser-focused on empowering OEMs to create, transform, and disrupt the automotive industry by unleashing the power of software and data. We enable this by:

Partnering with alliances such as COVESA to build a strong ecosystem of collaboration.
Having one single API for in-vehicle data storage, edge-to-cloud synchronization, time series storage, and more, improving the developer experience.
Focusing on having a robust, scalable, and secure suite of services trusted by tens of thousands of customers in more than 100 countries.

Together with COVESA’s vision for connected vehicles, we’re driving a future where this industry is safer, more efficient, and seamlessly integrated into the digital world. The journey is just beginning. To learn more about MongoDB-connected mobility solutions, visit the MongoDB for Manufacturing & Motion webpage. Achieving fast, reliable, and compressed data exchange is one of the pillars of software-defined vehicles; learn how MongoDB Atlas and Edge Server can help in this short demo.

COVESA is an industry alliance focused on connected vehicle systems that works with MongoDB and other members on collaboration and standardization to accelerate development. MongoDB contributes to COVESA through its expertise in handling large-scale data sets, providing Atlas Device SDKs and supporting the Vehicle Signal Specification (VSS). MongoDB's contributions include data management insights and support for standardization efforts. MongoDB recently partnered with AWS to showcase its connected vehicle solution at CES 2024. Together they aim to enhance the end-user experience and empower automotive OEMs.
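
To illustrate the one-to-one VSS-to-document mapping mentioned above, here is a minimal sketch assuming pymongo; the branch and signal names follow VSS conventions (Vehicle.Speed, Vehicle.CurrentLocation), but the document shape is illustrative, not an official COVESA schema.

```python
from datetime import datetime, timezone
from pymongo import MongoClient

vehicles = MongoClient("ATLAS_CONNECTION_STRING")["fleet"]["vehicles"]

# A VSS signal tree stores naturally as a nested document: no relational
# mapper and no joins; reads and writes work directly on the hierarchy.
vehicles.insert_one({
    "_id": "VIN-1HGBH41JXMN109186",
    "updatedAt": datetime.now(timezone.utc),
    "Vehicle": {
        "Speed": 62.5,                                   # km/h
        "CurrentLocation": {"Latitude": 48.137, "Longitude": 11.575},
        "Powertrain": {"CombustionEngine": {"ECT": 92}}  # coolant temp, degC
    },
})

# Query a nested VSS path directly with dot notation.
fast = vehicles.find({"Vehicle.Speed": {"$gt": 60}})
```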

