Tags

postgresql

Related articles:

解道jdon.com -

PostgreSQL is considering a move from its process-based model to a threaded model

PostgreSQL scales poorly on large systems, largely because its process-based model consumes resources. Not all databases have this problem, and PostgreSQL cannot get rid of it without some kind of major architectural change. The PostgreSQL database system has a history reaching back to 1986, and fundamental changes to a large codebase with such a long history are never easy. Moving PostgreSQL away from its process-oriented model is no small matter, but the project is seriously considering the change. A PostgreSQL instance runs as a large set of cooperating processes, one for each connected client. These processes communicate through several shared-memory regions using an elaborate library that…

AI-generated summary: PostgreSQL scales poorly because its process-based model consumes resources. The developers are considering migrating to a threaded model to share state more easily and reduce development costs. The transition faces challenges, however, such as the handling of global variables and potential performance loss. The discussion showed that most developers consider the change a good one, but no one is currently willing to invest the time to drive the effort, so there will be no move to a threaded model in the foreseeable future.


解道jdon.com -

Comparing materialized views in PostgreSQL and Oracle

To the end user, a materialized view is basically just a table: it simply caches the query result on disk so that the underlying query does not have to run every time. You could use this approach to give analysts some historical sales data; they don't need real-time information, just the last five years of sales. It may take up a lot of disk space, but in the end it is far easier on the server than running live queries against production data. Some characteristics of materialized views: Query optimization — because the data is precomputed and stored, the optimizer can use it directly, and some queries run faster. Write overhead — materialized views are usually read-only, based on a snapshot of the data. With on-commit refresh, synchronizing base-table changes to the mview is generally more complex than with a dynamic view. However, if you don't…
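A minimal PostgreSQL sketch of the caching pattern described above (table and column names are illustrative, not from the article):

```sql
-- Precompute five years of per-month sales for analysts.
CREATE MATERIALIZED VIEW sales_last_5y AS
SELECT date_trunc('month', sold_at) AS month,
       sum(amount)                  AS total
FROM   sales
WHERE  sold_at >= now() - interval '5 years'
GROUP  BY 1;

-- A unique index serves the "indexing considerations" point and is
-- required for CONCURRENTLY below.
CREATE UNIQUE INDEX ON sales_last_5y (month);

-- The snapshot goes stale; refresh on whatever schedule suits the analysts.
-- CONCURRENTLY avoids blocking readers during the refresh.
REFRESH MATERIALIZED VIEW CONCURRENTLY sales_last_5y;
```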

AI-generated summary: A materialized view is a table that caches query results on disk and can be used to optimize queries and improve performance. Materialized views have several characteristics, such as query optimization, extra write overhead, additional storage use, data-consistency concerns, indexing considerations, and partitioning. In Oracle, materialized views additionally offer automatic refresh, scheduled refresh, query rewrite, fast refresh, multi-row constraints, cached joins, staleness tolerance, and synchronized materialized view groups. Compared with SQL Server, materialized views in Oracle need to be refreshed periodically, whereas SQL Server's equivalent is kept up to date automatically.


Planet PostgreSQL -

Grant Fritchey: Querying PostgreSQL: Learning PostgreSQL with Grant

Writing queries to retrieve the data from a database is probably the single most common task when it comes to working with data. Working with data in PostgreSQL is no exception. Further, PostgreSQL has an incredibly rich, wide, and varied set of mechanisms for retrieving data. From standard SELECT… FROM… WHERE to windowing functions and recursive queries, PostgreSQL has it all. I honestly can’t do it justice in a single article. Further, since so much of this functionality is effectively identical to where I’m more comfortable, SQL Server, I’m not turning this into a PostgreSQL 101 on the SELECT statement. Instead, for this series, I’m just going to assume I may have more than one article on querying PostgreSQL. For this entry in the series, I’m going to focus on the core behaviors of SELECT, FROM and WHERE with an emphasis on what’s different from SQL Server. This won’t be a fundamental how-to on querying PostgreSQL, but instead an exploration of the gotchas you’re likely to experience coming in with existing knowledge of how you think these things should work. And hoo boy, there’s some fun stuff in there. Let’s get stuck in. In the sample database I’ve created as a part of this ongoing series, I created a couple of schemas and organized my tables within them. If you wish to execute the code or look at the data structures, the code is in my ScaryDBA/LearningPostgreSQL repository here. The objects and database you will need can be created/reset using the CreateDatabase.sql script, then adding sample data using the SampleData.sql script. The rest of the code in this article is in the 10_Select folder.
FROM
I actually love how the PostgreSQL document defines what you’re doing in the FROM clause: Trivial table expressions simply refer to a table on disk, a so-called base table, but more complex expressions can be used to modify or combine base tables in various ways.
While I wouldn’t myself define it this way, I find it to be interes[...]

AI-generated summary: This article covers the core behaviors of SELECT, FROM, and WHERE in PostgreSQL. In the FROM clause, simple expressions can reference base tables, while more complex expressions can modify or combine them. In JOINs, the USING clause determines the join condition automatically from matching column names and data types, with no explicit condition needed. The article also introduces the LATERAL clause and the WHERE clause. In the SELECT list, you specify the columns to return and can use aliases to make the code clearer. It also covers some special functions and clauses, such as LIMIT and FETCH. Overall, PostgreSQL offers rich functionality and flexible syntax for data retrieval.
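A short sketch of the JOIN … USING and LATERAL constructs the summary mentions (table and column names here are illustrative, not the ones from the article's sample database):

```sql
-- USING joins on identically named columns and collapses them in the output,
-- so no explicit ON condition is needed.
SELECT c.name, o.total
FROM   customers AS c
JOIN   orders    AS o USING (customer_id);

-- LATERAL lets a subquery in FROM reference columns of earlier FROM items:
-- here, the three most recent orders for each customer.
SELECT c.name, recent.total
FROM   customers AS c,
LATERAL (SELECT o.total
         FROM   orders AS o
         WHERE  o.customer_id = c.customer_id
         ORDER  BY o.ordered_at DESC
         LIMIT  3) AS recent;
```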


Planet PostgreSQL -

Jobin Augustine: How To Measure the Network Impact on PostgreSQL Performance

It is very common to see many infrastructure layers standing between a PostgreSQL database and the application server. The most common ones are connection poolers, load balancers, routers, firewalls, etc. We often forget or take for granted the network hops involved and the additional overhead they create on the overall performance. But it could cause severe performance penalties in many cases and overall throughput deterioration. I have been trying to get a good estimate of this overhead for quite some time. Previously I had written about how the volume of data transmission as part of SQL execution, as well as the cursor location, affects the overall performance. Meanwhile, Hans-Jürgen Schönig’s presentation, which brought up the old discussion of Unix socket vs. TCP/IP connection, triggered me to write about other aspects of network impact on performance. He demonstrated a specific case of a 2x performance degradation while using TCP/IP connection. How to detect and measure the impact: There is no easy mechanism for measuring the impact of network overhead. But a very close analysis of wait_events from pg_stat_activity can tell us the story as closely as possible. So we should be sampling the wait events. Many methods exist for wait-event sampling, including extensions. But I prefer not to install special tools or extensions on the user environment for the wait event sampling. At Percona Support, we use pg_gather as the method to collect and study the wait events because it is a standalone SQL script and doesn’t need to install anything on the database systems. It is designed to be very lightweight as well. There will be 2,000 samples collected per session. The pg_gather analysis report can show wait events and other information associated with each session. But I will be discussing and highlighting only the wait events portion of it in this blog while going through different types of workloads and how network performance shows up in wait events.
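Without installing anything, a crude approximation of the sampling that pg_gather performs is to read pg_stat_activity repeatedly and aggregate the wait events (this is a sketch, not pg_gather's actual query):

```sql
-- One sample of current wait events across active sessions.
-- ClientRead on sessions talking to a remote client is typically
-- where network latency shows up.
SELECT wait_event_type, wait_event, count(*)
FROM   pg_stat_activity
WHERE  state = 'active'
GROUP  BY wait_event_type, wait_event
ORDER  BY count(*) DESC;
```

In psql, `\watch 0.1` after such a query re-runs it on an interval, giving a rough wait-event sample over time.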
Case 1: Query retrieving a large[...]

AI-generated summary: Many infrastructure layers commonly sit between a PostgreSQL database and the application server, most often connection poolers, load balancers, routers, and firewalls. We frequently overlook, or take for granted, the network hops involved and the extra overhead they add to overall performance, but in many cases they can cause severe performance penalties and throughput deterioration. The author has been trying to get a good estimate of this overhead for some time, and previously wrote about how the volume of data transferred during SQL execution, as well as cursor location, affects overall performance. Meanwhile, Hans-Jürgen Schönig's presentation revived the old Unix socket vs. TCP/IP discussion and prompted this article on other aspects of network impact; he demonstrated a specific case of a 2x performance degradation over a TCP/IP connection. How to detect and measure the impact: there is no easy mechanism for measuring network overhead, but a very close analysis of wait_events from pg_stat_activity tells the story as closely as possible, so the wait events should be sampled. Many sampling methods exist, including extensions, but the author prefers not to install special tools or extensions on user environments. At Percona Support, pg_gather is used to collect and study wait events, because it is a standalone SQL script that requires nothing to be installed on the database system and is designed to be very lightweight; 2,000 samples are collected per session. The pg_gather analysis report shows the wait events and other information associated with each session.



Planet PostgreSQL -

oded valin: My Oracle to PostgreSQL Migration: The 7 Tools That Made It Possible

I have a confession to make: I'm a huge fan of Oracle. I absolutely adore the company and its technology. Admittedly, I'm not fond of their pricing practices, but overall, they offer excellent products. As an Oracle DBA for a major energy infrastructure company in California, I've spent years mastering Oracle databases. However, a recent decision from our executive team prompted a transition to PostgreSQL. This shift transformed me from an experienced Oracle DBA into a newcomer in the world of PostgreSQL. Having navigated through this migration, I feel compelled to share the insights I've gained along the way. This article details my journey, the challenges encountered, and the seven indispensable tools that facilitated this transition. My hope is that sharing these experiences will make your journey to PostgreSQL smoother.
Overview of the Migration Process
Migrating from Oracle to PostgreSQL isn't just a flip of a switch. It's a journey with a bunch of steps like schema conversion, data migration, application migration, and performance tuning. Each stage had its own hiccups, and I needed a toolbox of solutions to handle them.
Ora2Pg
Ora2Pg was my first ally. It's an open-source tool that converts Oracle database schemas into PostgreSQL format. As Ora2Pg is an open source project, you can see the popularity of the tool: Stars: 890 · Commits: 2,775 · Releases: 22 · Used by: n/a · License: GPL-3.0 · Lead contributor: Gilles Darold
Pros: Can handle a ton of Oracle objects; configurable through configuration files.
Limitations: Complex PL/SQL conversions might need manual intervention; large databases can take a long time to convert [...]
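As a rough illustration of the Ora2Pg workflow described above (paths and project name are made up; consult the Ora2Pg documentation and your own connection settings before running anything):

```shell
# Scaffold a migration project; this creates a config/ora2pg.conf template
# plus per-object export directories.
ora2pg --init_project my_migration --project_base /opt/migrations

# Assess migration complexity before converting anything.
ora2pg -c /opt/migrations/my_migration/config/ora2pg.conf \
       -t SHOW_REPORT --estimate_cost

# Export the table definitions as PostgreSQL DDL.
ora2pg -c /opt/migrations/my_migration/config/ora2pg.conf \
       -t TABLE -o tables.sql
```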

AI-generated summary: This article recounts the author's migration from Oracle to PostgreSQL and introduces seven indispensable tools. First is Ora2Pg, which converts Oracle database schemas into PostgreSQL format. Next is AWS Database Migration Service (DMS), which helped the author migrate data with minimal downtime. Then pgLoader, a tool for loading data quickly, and Foreign Data Wrappers (FDW), which let PostgreSQL manage data held in other databases. The article also covers pg_dump, pg_restore, and other built-in PG features for backup and restore; EverSQL, which optimizes SQL queries; and Npgsql, a data provider for connecting to PostgreSQL from .NET applications. It also lists common issues encountered when migrating from Oracle to PostgreSQL, such as differences in SQL syntax and features, transaction behavior, case sensitivity, sequences and auto-increment columns, data types, and the handling of NULL versus empty strings. The author hopes these tools and experiences help readers migrate smoothly.


Planet PostgreSQL -

Sergey Pronin: Deploy PostgreSQL on Kubernetes Using GitOps and ArgoCD

In the world of modern DevOps, deployment automation tools have become essential for streamlining processes and ensuring consistent, reliable deployments. GitOps and ArgoCD are at the cutting edge of deployment automation, making it easy to deploy complex applications and reducing the risk of human error in the deployment process. In this blog post, we will explore how to deploy the Percona Operator for PostgreSQL v2 using GitOps and ArgoCD. The setup we are looking for is the following: Teams or CICD roll out the manifests to Github ArgoCD reads the changes and compares the changes to what we have in Kubernetes ArgoCD creates/modifies Percona Operator and PostgreSQL custom resources Percona Operator takes care of day-1 and day-2 operations based on the changes pushed by ArgoCD to custom resources Prerequisites: Kubernetes cluster GitHub repository. You can find my manifests here. Start it up Deploy and prepare ArgoCD ArgoCD has quite detailed documentation explaining the installation process. I did the following: kubectl create namespace argocd kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml Expose the ArgoCD server. You might want to use ingress or some other approach. I’m using a Load Balancer in a public cloud: kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}' Get the Load Balancer endpoint; we will use it later: kubectl -n argocd get svc argocd-server NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE argocd-server LoadBalancer 10.88.1.239 123.21.23.21 80:30480/TCP,443:32623/TCP 6h28m I’m not a big fan of Web User Interfaces, so I took the path of using argocd CLI. Install it by following the CLI installation documentation. Retrieve the admin password to log in using the CLI: argocd admin initial-password -n argocd Login to the server. Use the Load Balancer e[...]
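The truncated walkthrough above presumably continues by registering the Git repository as an ArgoCD application. A hedged sketch of what that might look like, with placeholder repository, namespace, and endpoint values rather than the author's actual manifests:

```shell
# Log in with the admin password retrieved above (endpoint is a placeholder).
argocd login 123.21.23.21 --username admin \
       --password "$ARGOCD_PASSWORD" --insecure

# Point ArgoCD at the Git repo holding the Percona Operator and
# PostgreSQL custom-resource manifests; automated sync keeps the
# cluster matching the repo.
argocd app create percona-pg \
  --repo https://github.com/example/percona-pg-manifests.git \
  --path manifests \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace postgres \
  --sync-policy automated

# Verify that ArgoCD has synced the custom resources.
argocd app get percona-pg
```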

AI-generated summary: In the modern DevOps world, deployment automation tools have become essential for streamlining processes and ensuring consistent, reliable deployments. GitOps and ArgoCD are at the cutting edge of deployment automation, making it easy to deploy complex applications and reducing the risk of human error in the process. This post explores deploying the Percona Operator for PostgreSQL v2 using GitOps and ArgoCD. The target setup: teams or CI/CD push manifests to GitHub; ArgoCD reads the changes and compares them with what is in Kubernetes; ArgoCD creates/modifies the Percona Operator and PostgreSQL custom resources; and the Percona Operator handles day-1 and day-2 operations based on the changes ArgoCD pushes to the custom resources. Prerequisites: a Kubernetes cluster and a GitHub repository (the author's manifests are linked in the original). Getting started, deploy and prepare ArgoCD: ArgoCD's documentation explains the installation process in detail; the author ran: kubectl create namespace argocd; kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml. Expose the ArgoCD server (Ingress or another approach works; the author uses a load balancer in a public cloud): kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'. Get the load balancer endpoint for later use: kubectl -n argocd get svc argocd-server. Not being a fan of web user interfaces, the author uses the argocd CLI, installed per the CLI documentation. Retrieve the admin password to log in via the CLI: argocd admin initial-password -n argocd. Log in to the server using the load balancer endpoint above: argocd login 123.



Planet PostgreSQL -

muhammad ali: Understanding the CREATEROLE Privilege in PostgreSQL

This blog is about the CREATEROLE privilege and the new enhancements in PostgreSQL 16, part of the fine-grained access controls for database security. The post Understanding the CREATEROLE Privilege in PostgreSQL appeared first on Stormatics.

AI-generated summary: The CREATEROLE privilege allows a user to add, remove, and modify other roles, but not to create or modify superusers. It also cannot grant or revoke the REPLICATION privilege, create REPLICATION users, or edit those users' role attributes. A user with CREATEROLE can access all predefined system roles and could grant or revoke membership in them, even roles it otherwise has no access to; granting CREATEROLE therefore puts the system at risk. In PostgreSQL 16, a user with CREATEROLE can no longer grant membership in arbitrary roles, only in roles for which it holds ADMIN OPTION. The WITH ADMIN OPTION privilege lets a user grant role membership to others, revoke that membership, and comment on the role, but not drop it. In PostgreSQL 15 and earlier, a CREATEROLE user could create roles and grant them to other users; in PostgreSQL 16, it cannot grant membership without ADMIN OPTION. CREATEROLE should therefore be granted with care, and only to trusted people or roles.
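A small sketch of the PostgreSQL 16 behavior described above (role names are illustrative):

```sql
-- A non-superuser allowed to manage roles.
CREATE ROLE role_admin LOGIN CREATEROLE;

-- In PostgreSQL 16, roles that role_admin itself creates come with an
-- implicit grant WITH ADMIN OPTION, so it can manage membership in them.
SET ROLE role_admin;
CREATE ROLE app_readers;
GRANT app_readers TO some_user;   -- works: implicit ADMIN OPTION

-- But granting membership in a pre-existing role it lacks ADMIN OPTION
-- on is rejected in PG 16 (it would have succeeded in PG 15 and earlier).
GRANT pg_monitor TO some_user;    -- ERROR: permission denied
```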


Planet PostgreSQL -

Marco Slot: Citus 12: Schema-based sharding for PostgreSQL

What if you could automatically shard your PostgreSQL database across any number of servers and get industry-leading performance at scale without any special data modelling steps? Our latest Citus open source release, Citus 12, adds a new and easy way to transparently scale your Postgres database: Schema-based sharding, where the database is transparently sharded by schema name. Schema-based sharding gives an easy path for scaling out several important classes of applications that can divide their data across schemas: Multi-tenant SaaS applications Microservices that use the same database Vertical partitioning by groups of tables Each of these scenarios can now be enabled on Citus using regular CREATE SCHEMA commands. That way, many existing applications and libraries (e.g. django-tenants) can scale out without any changes, and developing new applications can be much easier. Moreover, you keep all the other benefits of Citus, including distributed transactions, reference tables, rebalancing, and more. In this blog post, you’ll get a high-level overview of schema-based sharding and other new Citus 12 features: What is schema-based sharding? How to use Citus schema-based sharding for Postgres Benefits of schema-based sharding Choosing a sharding model for multi-tenant applications (schema-based vs. row-based) Migrating an existing schema-per-tenant application to Citus MERGE improvements Even more details available in the release notes on the 12.0 Updates page. And if you want to see demos of some of this functionality, be sure to join us for the livestream of the Citus 12.0 release party on Wed 02 Aug (mark your calendar and join us). Let’s dive in! What is schema-based sharding? Schema-based sharding means that tables from the same schema are placed on the same node, while different schemas may be on different nodes. That way, queries and transactions that involve a single schema can always be evaluate[...]
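A hedged sketch of what enabling schema-based sharding looks like, based on the Citus 12 release materials (the GUC and function names are as documented there; tenant and table names are illustrative):

```sql
-- Make newly created schemas distributed schemas.
SET citus.enable_schema_based_sharding TO on;
CREATE SCHEMA tenant_a;
CREATE SCHEMA tenant_b;

-- Tables within a distributed schema are co-located on one node,
-- so single-tenant queries and transactions stay local to that node.
CREATE TABLE tenant_a.orders (id bigint PRIMARY KEY, total numeric);

-- An existing schema can also be converted in place.
SELECT citus_schema_distribute('tenant_b');
```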

AI生成摘要 Citus 12, the latest open source release, introduces schema-based sharding for PostgreSQL databases. This feature allows for transparent scaling of databases by dividing data across schemas. It is particularly useful for multi-tenant SaaS applications, microservices, and vertical partitioning scenarios. Schema-based sharding can be enabled by creating distributed schemas using regular CREATE SCHEMA commands. This feature provides benefits such as easy shard management, automatic rebalancing, data sharing across schemas, and more. Schema-based sharding can be used alongside row-based sharding for different workload patterns. The text also discusses the benefits and considerations of schema-based sharding, as well as how to use it in Citus 12. Additionally, it mentions the MERGE command and its compatibility with schema-based sharding in Citus 12.
