Tag

security

Related articles:

Learn about the latest advances in Web3 security research, along with best practices for database security.

The Verge

Google Pixel 8A leak reveals seven years of security updates

We’ve already seen a bunch of images revealing what Google’s Pixel 8A might look like, but now, leaked marketing materials shared by Android Headlines give us an even better picture of what we can expect. The promotional images suggest that the budget-friendly Pixel 8A will come with seven years of security updates. But the materials don’t indicate whether the phone will come with seven years of Android OS upgrades, something its flagship Pixel 8 and 8 Pro have. Still, seven years is a big step up from Google’s previous policy for Pixel phones, which provided only five years of security updates and three years of Android OS upgrades.

The Google Pixel 8A will get seven years of security updates, though it is unclear whether it will also get seven years of Android OS upgrades. It will use the same Tensor G3 processor as the Pixel 8 and 8 Pro and offer fast charging, reliable all-day battery life, IP67 water resistance, and Google's noise-reduction feature. The Pixel 8A will also receive a series of AI upgrades from Google.

绿盟科技技术博客

RSAC 2024 Innovation Sandbox | RAD Security: A New Approach to Detecting and Responding to Anomalous Cloud-Native Behavior

RSAC 2024 Innovation Sandbox analysis: RAD Security

RAD Security is a cloud-native security company whose products include real-time Kubernetes Security Posture Management (KSPM) and cloud-native identity threat detection and response (ITDR). Its products use eBPF to establish behavioral baselines and defend against zero-day attacks. RAD Security has released a Kubernetes Bill of Materials, real-time KSPM with AI capabilities, and ITDR that complements RBAC management capabilities.

GitLab

A developer's guide to building an AI security governance framework

Artificial Intelligence (AI) has firmly established itself as a pillar of digital transformation, disrupting industries, increasing efficiency, and providing unmatched access to large data sets. AI also raises profound questions regarding security governance: how do I leverage the best of what AI has to offer while mitigating its potential security risks? As AI continues to advance, there is a growing need for strong oversight and accountability. This article delves into the complex landscape of AI security governance, exploring the frameworks, strategies, and practices that organizations like GitLab are adopting to ensure the responsible development of AI technologies and features.

Greater scrutiny on AI

AI: Single term, numerous realities

AI isn't a monolithic entity: it encompasses a spectrum of technologies and applications, from the machine learning algorithms that power recommendation systems to advanced natural language processing models like Anthropic's Claude 3. Each AI system brings its own set of opportunities and challenges. According to a 2023 MITRE report, three main areas of AI currently exist:

AI as a subsystem

"AI is embedded in many software systems. Discrete AI models routinely perform machine perception and optimization functions, from face recognition in photos uploaded to the cloud, to dynamically allocating and optimizing network resources in 5G wireless networks.

"There are a wide range of vulnerabilities and threats against these types of AI subsystems – from data poisoning attacks to adversarial input attacks – that can be used to manipulate subsystems."

AI as human augmentation

"Another application of AI is in augmenting human performance, allowing a person to operate with much larger scope and scale. This has wide-ranging implications for workforce planning as AI has the potential to increase productivity and shift the composition of labor markets, similar to the role of automation in the manufacturing industry.

"While sophisticated hackers and military information operations can already generate believable content today using techniques such as computer-generated imagery, LLMs will make that capability available to anyone, while increasing the scope and scale at which the professionals can operate."

AI with agency

"A segment of the tech community is increasingly concerned about scenarios where sophisticated AI could operate as an independent, goal-seeking agent. While science fiction historically embodied this AI in anthropomorphic robots, the AI we have today is principally confined to digital and virtual domains.

"One scenario is an AI model given a specific adversarial agenda. Stuxnet is perhaps an early example of sophisticated, AI-fueled, goal-seeking malware with an arsenal of zero-day attacks that ended up escaping onto the internet."

You can focus your security governance efforts based on which of these areas your company is looking to adopt and the expected business benefits.

Frameworks for AI security governance

Effective AI security governance requires navigating a complex landscape of guidelines and principles developed by various organizations. Governments, international organizations, and tech companies have all played their part in shaping AI security governance frameworks.
You can review the frameworks below and choose those that are relevant and/or apply to your organization:

- NIST AI Risk Management Framework (AI RMF)
- Google's Secure AI Framework (SAIF)
- OWASP Top 10 for LLMs
- The UK's NCSC Principles for the Security of Machine Learning

While these frameworks provide valuable guidance, they also introduce complexity. Organizations must determine which apply to their AI usage and how they align with their practices. Moreover, the dynamic nature of AI requires continuous adaptation to stay secure. Note that if you read through these frameworks, you will see that numerous controls overlap with standard security best practices. This isn't a coincidence: a strong overall security program is a prerequisite for proper AI security governance.

How-to: AI security governance

The why and the what

AI security governance starts with understanding what AI technologies your organization is using or developing, why you are using them, and where these technologies fit into your operations. It's essential to define clear objectives and identify the potential security risks associated with AI deployment. This introspection lays the foundation for effective AI security governance.

The why

Understanding the "why" behind each AI application is pivotal to building effective security governance. Each AI system deployed has to serve a specific purpose. Is AI being used to enhance customer experiences, automate manual tasks, or support decision-making? By uncovering the motivations driving AI initiatives, organizations can align these projects with their broader business objectives. This alignment ensures that AI investments are strategically focused, delivering value in line with organizational goals. It also helps prioritize the AI systems that have the greatest impact on the company's core mission.

The what

The foundational step of AI security governance is a comprehensive inventory of all AI systems, algorithms, and data sources within your organization. This means cataloging every AI technology in use, from machine learning models and natural language processing algorithms to computer vision systems, and identifying the data sources feeding these systems and their origins (internal databases, customer interactions, or third-party data providers). Such an inventory provides three main benefits (a minimal data-model sketch follows the list):

- a holistic understanding of the AI ecosystem within the organization
- a strong basis for monitoring, auditing, and managing these assets effectively
- security efforts focused on the high-risk/critical areas
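To make the inventory step concrete, here is a minimal Java sketch of how such an entry might be modeled. The AiSystem record, its fields, and the example systems are hypothetical illustrations, not a schema from the article or a GitLab product.

```java
import java.util.List;

// Hypothetical data model for an AI-system inventory entry.
// Field names and risk levels are illustrative assumptions, not a standard schema.
public class AiInventory {

    enum RiskLevel { LOW, MEDIUM, HIGH }

    record AiSystem(
            String name,              // e.g., "support-chatbot"
            String purpose,           // the "why": the business objective it serves
            String modelType,         // e.g., "LLM", "recommendation", "computer vision"
            List<String> dataSources, // origins of training/inference data
            RiskLevel riskLevel) {}   // drives where security effort is focused

    public static void main(String[] args) {
        List<AiSystem> inventory = List.of(
                new AiSystem("support-chatbot", "automate customer support", "LLM",
                        List.of("customer tickets", "public docs"), RiskLevel.HIGH),
                new AiSystem("photo-tagger", "face recognition in uploads", "computer vision",
                        List.of("user photo uploads"), RiskLevel.MEDIUM));

        // The third benefit in practice: review high-risk systems first.
        inventory.stream()
                .filter(s -> s.riskLevel() == RiskLevel.HIGH)
                .forEach(s -> System.out.println("Review first: " + s.name()));
    }
}
```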
How to develop a security risk management program

A robust security risk management program is at the core of responsible AI security governance, and its critical building blocks are the what and the why discussed above. The specifics of AI make security risk management more complex; the NIST AI RMF mentioned earlier highlights numerous challenges, including:

AI-related security risks are difficult to measure
Potential security risks could emerge from the AI model, the software on which you are training the model, or the data ingested by the model. Different stages of the AI lifecycle might also trigger specific security risks depending on which actors (producers, developers, or consumers) are leveraging the AI solution.

The risk tolerance threshold can be complex to determine
Because the potential security risks aren't easily identifiable, determining the risk tolerance your organization can withstand regarding AI can be a very empirical exercise.

AI should not be considered in isolation
Security governance of AI systems should be part of your overall security risk management strategy. Different users might hold different parts of the overall picture, so complete information and full visibility into the AI lifecycle are critical to making the best decisions.

Security risk management should be an ongoing process that adapts to the quickly evolving AI landscape. Reassessing the program, reviewing assumptions about the environment, and involving additional business stakeholders should all happen on a regular basis.

AI security governance and the GitLab DevSecOps platform

Using AI to power DevSecOps

Let's take GitLab Duo, our suite of AI capabilities that helps power DevSecOps workflows, as an example. GitLab Duo Code Suggestions helps developers write code more efficiently by using generative AI to assist in software engineering tasks. It works either through code completion or through code generation from natural-language code comment blocks. To ensure it can be fully leveraged, the security needs of potential users and customers have to be considered. For example, data used to produce Code Suggestions is immediately discarded by the AI models. All of GitLab's AI providers are subject to contractual terms with GitLab that prohibit the use of customer content for the provider's own purposes, except to perform their independent legal obligations. GitLab's own privacy policy prevents us from using customer data to train models without customer consent.

Of course, to fully benefit from Code Suggestions, you should:

- understand and review all suggestions to see if they align with your development guidelines
- limit providing sensitive information or proprietary code in prompts
- ensure each suggestion follows the same secure coding guidelines your company has
- review the code using automated scanning for vulnerable dependencies, input validation and output sanitization, as well as license checks

Securing AI

Managing the output of AI systems is just as important as managing the input. Security scanning tools can help identify vulnerabilities and potential threats in AI-generated code, and managing AI output requires a systematic approach to code review and validation. Organizations should integrate security scanning tools into their CI/CD pipelines, ensuring that AI-generated code is checked for security vulnerabilities before deployment. Automated security checks can detect vulnerabilities early in the development process, reducing the risk that vulnerable suggested code blocks get merged. For any GitLab Duo-generated code, changes are managed via merge requests, which trigger your CI pipeline (including any security and code quality scanning you have configured). This ensures that any governance rules you have set up for your merge requests, such as required approvals, are enforced.

AI systems are systems: existing security controls apply to them the same way they apply to the rest of your environment. Common application security controls still apply, including security reviews, security scanning, threat modeling, encryption, etc.
The Google Secure AI Framework highlights six elements:

- expand strong security foundations to the AI ecosystem
- extend detection and response to bring AI into an organization's threat universe
- automate defenses to keep pace with existing and new threats
- harmonize platform-level controls to ensure consistent security across the organization
- adapt controls to adjust mitigations and create faster feedback loops for AI deployment
- contextualize AI system risks in surrounding business processes

If you have a strong security program, managing AI will be an extension of your current program that accounts for AI-specific risks and vulnerabilities.

How GitLab Duo is secured

GitLab recognizes the significance of security in AI governance, and our security program is focused on ensuring our customers can fully leverage GitLab Duo in a secure manner. This is how GitLab's security departments collaborate to secure GitLab's AI features:

- Security Assurance: addresses our compliance requirements regarding security, ensures AI security risks are identified and properly managed, and helps our customers understand how we secure our application, infrastructure, and services.
- Security Operations: monitors our infrastructure and quickly responds to threats using a team of skilled engineers as well as automation capabilities, helping to ensure AI features aren't abused or used in a malevolent manner.
- Product Security: helps the product and engineering teams by providing security expertise for our AI features and by securing the underlying infrastructure on which our product is hosted.
- Corporate Security and IT Operations: finds potential vulnerabilities in our product to proactively mitigate, and supports other departments by performing research on relevant security areas.

Our Security team works closely with GitLab's Legal and Corporate Affairs team to ensure our framework for AI security governance is comprehensive. The recent launch of the GitLab AI Transparency Center showcases our commitment to implementing strong AI governance: we published our AI ethics principles as well as our AI continuity plan to demonstrate our AI resiliency.

Learn more

AI security governance is a complex area, especially while the field is still nascent. As AI continues to support our workflows and accelerate our processes, responsible AI security governance becomes a key pillar of any security program. By understanding the nuances of AI, enhancing your risk management program, and using AI features that are developed responsibly, you can ensure that AI-powered workflows follow the principles of security, privacy, and trust.

Learn more about GitLab Duo AI features.

Artificial intelligence (AI) is a pillar of digital transformation, increasing efficiency and providing access to large data sets. This article explores the complex landscape of AI security governance to ensure the responsible development of AI technologies.

解道jdon.com

Integrating JWT with Spring Security 6 in Spring Boot 3

We integrate JWT (JSON Web Tokens) with Spring Security in our Spring Boot application. This allows us to strengthen our security framework by incorporating robust authentication and authorization mechanisms based on JWT. The goal: ensure that critical endpoints can only be accessed with a valid JWT. Our Spring Boot project has two key REST endpoints, one for fetching all employee data and one for adding a new employee; these endpoints are secured and require JWT-based authentication. What is a JWT? A JWT (JSON Web Token) is like a digital pass that helps keep web applications secure. When someone logs in to the application, the server issues them a…

This article describes how to integrate JWT (JSON Web Tokens) with Spring Security in a Spring Boot application. A JWT is a security token used to strengthen an application's authentication and authorization mechanisms. The article details the parts of a JWT and how one is generated, and explains why JWT was chosen. It then covers the steps for integrating JWT with Spring Security: creating a user class, implementing the UserDetailsService interface, creating the JwtService and JwtFilter classes, and configuring a SecurityConfig class to set the security rules. Integrating JWT with Spring Security yields authentication and authorization that is more secure, faster, and easier to manage.
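As a companion to the steps above, here is a minimal sketch of what the JwtService might look like. It assumes the jjwt library (io.jsonwebtoken, 0.11.x API) and a hard-coded demo secret; the class and method names follow the article's outline, but the details are illustrative rather than taken from it.

```java
import io.jsonwebtoken.Claims;
import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.security.Keys;

import javax.crypto.SecretKey;
import java.nio.charset.StandardCharsets;
import java.util.Date;

// Minimal JWT issue/verify service sketch using the jjwt library (0.11.x API).
// The secret is hard-coded for the demo; load it from configuration in real code.
public class JwtService {

    private static final SecretKey KEY = Keys.hmacShaKeyFor(
            "change-me-this-demo-secret-must-be-at-least-256-bits!!"
                    .getBytes(StandardCharsets.UTF_8));

    private static final long EXPIRATION_MS = 15 * 60 * 1000; // 15 minutes

    // Issue a signed token carrying the username as its subject.
    public String generateToken(String username) {
        Date now = new Date();
        return Jwts.builder()
                .setSubject(username)
                .setIssuedAt(now)
                .setExpiration(new Date(now.getTime() + EXPIRATION_MS))
                .signWith(KEY)
                .compact();
    }

    // Parse and verify the token; throws JwtException if it is invalid or expired.
    public String extractUsername(String token) {
        Claims claims = Jwts.parserBuilder()
                .setSigningKey(KEY)
                .build()
                .parseClaimsJws(token)
                .getBody();
        return claims.getSubject();
    }
}
```

A JwtFilter would then read the Authorization header on each request, call extractUsername, and populate Spring Security's SecurityContext before the request reaches the secured employee endpoints.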

Percona Database Performance Blog

PostgreSQL Database Security Best Practices

When data is everything, the sophistication of cybersecurity threats casts a shadow over the world of data security, including for those using PostgreSQL as their database of choice. Although PostgreSQL is renowned for its reliability, flexibility, and strong feature set, in the face of relentless cyber-attacks even its users can find themselves in a situation where […]

Best practices for protecting a PostgreSQL database from threats include keeping software up to date, hardening server settings, strengthening authentication, applying the principle of least privilege, encrypting data, protecting the network, audit logging and monitoring, and maintaining a backup and disaster recovery plan. Percona offers a database security assessment service.
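To make two of these practices concrete (encryption in transit and least privilege), here is a minimal Java sketch of a client connecting over TLS as a read-only role. The ssl and sslmode parameters are standard PostgreSQL JDBC driver options, but the host, database, role, and table names are hypothetical, and the article itself does not prescribe this code.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Connects as a least-privilege, read-only role over TLS.
// Host, database, role, and table names are placeholders.
public class SecurePgConnection {

    public static void main(String[] args) throws SQLException {
        // sslmode=verify-full enforces TLS and verifies the server certificate,
        // covering the "data encryption" and "protect the network" practices.
        String url = "jdbc:postgresql://db.example.com:5432/appdb"
                + "?ssl=true&sslmode=verify-full";

        // reporting_ro is assumed to be a role granted only SELECT on the
        // tables it needs, following the principle of least privilege.
        try (Connection conn = DriverManager.getConnection(
                     url, "reporting_ro", System.getenv("PGPASSWORD"));
             PreparedStatement stmt = conn.prepareStatement(
                     "SELECT count(*) FROM orders");
             ResultSet rs = stmt.executeQuery()) {
            if (rs.next()) {
                System.out.println("orders: " + rs.getLong(1));
            }
        }
    }
}
```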

绿盟科技技术博客

RSAC 2024 Innovation Sandbox | Future Line of Defense: Harmonic Security's Data Protection in the AI Era

RSAC 2024 Innovation Sandbox analysis: Harmonic Security

RSA Conference 2024 will open on May 6th. Harmonic Security, a finalist in the RSAC Innovation Sandbox, helps enterprises adopt AI securely and protect sensitive data. Its solutions enhance AI visibility, provide efficient data protection, and introduce virtual security operations. Founders Alastair Paterson and Bryan Woolgar-O'Neil have extensive experience in entrepreneurship and cybersecurity. Harmonic Security addresses the security risks of generative AI and offers effective solutions.

The Cloudflare Blog

Cloudflare named in 2024 Gartner® Magic Quadrant™ for Security Service Edge

Gartner has once again named Cloudflare in the Gartner® Magic Quadrant™ for Security Service Edge (SSE) report.

Cloudflare has been recognized by Gartner for the second consecutive year in the Magic Quadrant for Security Service Edge (SSE) report, with praise for its ability to execute and completeness of vision. The company's investment in its global network over the past 14 years allows it to roll out capabilities faster and more cost-effectively than competitors. Cloudflare One, its SSE platform, provides a range of security solutions, including Zero Trust access control, DNS filtering, secure SaaS usage, data protection, and digital experience monitoring. Cloudflare is committed to continuously improving its SSE platform and delivering comprehensive security solutions.

解道jdon.com

Securing a Spring Boot 3 Application with Spring Security 6.1 and Later

In this article, we explore how to use the latest updates in Spring Security to secure a web application built with the latest version of Spring Boot. Our journey takes us through creating a Spring Boot web project, integrating a PostgreSQL database via Spring Data JPA, and applying the security measures provided by the updated Spring Security framework. The article falls into two main parts. Part one: building an employee management system with CRUD operations. In this initial phase, we focus on crafting an employee management system with basic CRUD (create, read, update, delete) operations, laying the groundwork for the enhancements that follow. Part two: using Spr…

This article shows how to use Spring Security 6.1 and later to secure a web application built with Spring Boot 3. It is split into two parts: the first develops an employee management system with CRUD operations, and the second uses Spring Security to strengthen the protection of its endpoints. The article walks through how to configure Spring Security and how to handle user authentication and authorization, and also covers dealing with CORS issues and integrating it all with Spring Security.
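As a sketch of the kind of configuration the article describes, here is a SecurityFilterChain bean using the Spring Security 6.1 lambda DSL. The endpoint paths and the choice of HTTP Basic are assumptions for illustration, not the article's exact setup.

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.HttpMethod;
import org.springframework.security.config.Customizer;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.web.SecurityFilterChain;

// Minimal Spring Security 6.1 configuration using the lambda DSL.
// Paths and the authentication mechanism are illustrative placeholders.
@Configuration
@EnableWebSecurity
public class SecurityConfig {

    @Bean
    public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        http
            // Disable CSRF for a stateless REST API (keep it on for browser forms).
            .csrf(csrf -> csrf.disable())
            .authorizeHttpRequests(auth -> auth
                // Reads are open; everything else requires an authenticated user.
                .requestMatchers(HttpMethod.GET, "/api/employees/**").permitAll()
                .anyRequest().authenticated())
            // HTTP Basic keeps the sketch self-contained; JWT or form login also fit here.
            .httpBasic(Customizer.withDefaults());
        return http.build();
    }
}
```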

