Opensource.com

What cloud developers need to know about hardware

JayF | Wed, 03/08/2023 - 03:00

It's easy to forget the progress that people in tech have made. In the early 2000s, most local user groups held regular install fests. Back then, to configure a single machine to run Linux well, we had to know intimate details about the hardware and how to configure it. Now, almost twenty years later, we represent a project whose core ideal is to make getting a single computer to run Linux as easy as an API call. In this new world, operators and developers alike no longer have to worry about the hardware in their servers.

This change has had a profound impact on the next generation of operators and developers. In the early days of computer technology, you had to put your hands on the hardware frequently. If a computer needed more memory, you just added it. As time passed, technology evolved in ways that moved the operator further from the hardware. What used to be a trip to the data center became a support ticket to have remote hands work on the hardware. Eventually, the hardware was taken out of your hands altogether. Instead, you now summon and destroy "servers" with simple commands and no longer have to worry about hardware at all, as the short sketch below illustrates.
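To make that claim concrete, here is a minimal sketch of creating and destroying a cloud server with the openstacksdk Python library. The cloud name, image, flavor, network, and server names are placeholder assumptions for illustration, not anything specified in this article.

```python
import openstack

# Credentials come from a clouds.yaml entry; "mycloud" is a placeholder name.
conn = openstack.connect(cloud="mycloud")

# Look up existing resources by name (all of these names are assumptions).
image = conn.compute.find_image("ubuntu-22.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("private")

# "Summon" a server with a single API call...
server = conn.compute.create_server(
    name="demo-server",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(f"{server.name} is {server.status}")

# ...and destroy it just as easily.
conn.compute.delete_server(server)
```

Whether those few lines land on a virtual machine or, through a project such as Ironic, on a physical server in a rack is exactly the kind of detail the abstraction hides.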
Here is the real truth: hardware exists because it is needed to power clouds. But what is a cloud, really?

Why hardware is critical to the cloud

A cloud is a centralization of foundational resources built upon abstractions. It can range from something as simple as a hypervisor running a few VMs in your homelab to levels of complexity that include custom servers, networking gear, containers, and technology designed from the ground up to focus on efficiencies of scale. Clouds are nebulous. They evolve.

Those entering technology today don't have the same hands-on experience that more seasoned developers had. Many are trained to use clouds from their earliest interactions with computers. They don't know a world without a button to change the memory allocation, and they can point their attention to higher levels in the technology stack. Yet without an understanding of the foundations their infrastructure is built upon, they implicitly give away the opportunity to learn the lower levels of the stack, including hardware. No fault lies here: the implementers and operators of cloud infrastructure have made deliberate choices to make their products easier to use.

This means that now, more than ever, you have to think intentionally about what trade-offs you make, or others make for you, when choosing to use cloud technologies. Most people will not know what trade-offs have been made until they get their first oversized cloud bill or their first outage caused by a "noisy neighbor." Can businesses trust their vendors to make trade-offs that are best for their operations? Will vendors suggest the more efficient services, or the more profitable ones? Let the buyer (or engineer!) beware.

[ Related read: 5 things open source developers should know about cloud services providers ]

Thinking intentionally about trade-offs requires looking at your requirements and goals from multiple perspectives. Infrastructure decisions, and the trade-offs therein, are inherent to the overall process, design, or use model of a project, which is why they must be planned for as early as possible. Several different paths must be considered to find your project a good home.

First, there is the axis of the goal to be achieved, or the service provided. This may come with requirements around speed, quality, or performance, which can in turn drive a number of other variables. You may need specialized hardware such as GPUs to process a request with acceptable speed. Will this workload need to auto-scale, or not? Of course, these paths are intertwined, and the question quickly becomes "Will my wallet auto-scale?"

Business requirements are another part of this to consider. Your project may have specific security or compliance requirements that dictate where data is stored. Proximity to related services is also a potential concern, whether that means a low-latency connection to a nearby stock exchange or the ability to provide a high-quality local video cache as part of a content delivery network.

Finally, there is the value and cost of the service provided: how much you wish to, or can, spend to meet the requirements. This is tightly bound to the first path, the "what" your business is and the "how" of its operations. It can be something as mundane as whether your business prefers CapEx or OpEx.

[ Also read: Cloud services: 4 ways to get the most from your committed spend ]

When looking at these options, it is easy to see how changing any one variable can begin to change the others. They are inherently intertwined, and some technologies may allow these variables to shift dynamically. Without understanding the lower layers of the substrate, you risk taking paths that reinforce this dynamic model of billing. For some, that is preferred. For others, it is dreaded.

Even though hardware-specific knowledge has become more optional in modern technology stacks, we hope this article has encouraged you to look into what you may be missing without even knowing it. Hardware improvements have driven a large share of feature delivery and efficiency gains, shrinking computers from room-sized monstrosities to devices small enough to implant inside a human. We hope you take time to stop, learn, and consider what hardware platform your next project will run on, even if you don't control it. And if you are a student who hasn't gotten your head out of the clouds yet, go find an old computer, install a stick of RAM, and challenge yourself to learn something new. The cloud is everywhere, so hardware is more critical than ever.

Image by: Ian Stauffer on Unsplash

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.

Julia has been working in technology for over twenty years, starting in basic IT, moving through complex networking and systems engineering, and even building entire data centers. Eventually she moved into automation and software engineering, with a passion for making the world a better place. Julia's day job is with Red Hat, Inc. as a Senior Principal Software Engineer. At Red Hat, she has worked on Red Hat's OpenStack Platform and OpenShift product offerings in the area of automated systems deployment.
Among her many other roles, she serves on the Board of Directors of the OpenInfra Foundation and presently serves as Chair of the Board. She also works on the Ironic project (part of OpenStack), largely focusing on standalone use cases and improved operator experience, which also led to her serving as the project's leader for a number of years.
