A data center is a physical location that provides the computing power to run applications, the storage to hold and process data, and the networking to connect employees to the resources they need to do their jobs.
Experts have long predicted that cloud-based data centers would replace on-premises data centers, but many organizations have concluded that some applications will always need to run on-premises. Rather than dying, the data center is evolving.
It is becoming more distributed, with edge data centers springing up to process IoT data. It is becoming more efficient, through technologies such as virtualization and containers. It is adopting cloud-like capabilities such as self-service. And in the hybrid model, the on-premises data center works hand in hand with cloud resources.
Data centers were once available only to large companies that could afford the space, resources, and staff to keep them running. Today, data centers come in many forms, including colocated, hosted, cloud, and edge. In all of these cases, the data center remains a locked, noisy, cold place where your application servers and storage devices can run safely around the clock.
All data centers have the same basic infrastructure, which lets them work reliably and consistently. Basic components include:
Power: Data centers require clean, reliable power to keep equipment running continuously. For redundancy and high availability, a data center will have multiple power circuits, backed by uninterruptible power supply (UPS) batteries and diesel generators.
Cooling: Electronics generate heat, which can damage equipment if it is not managed. Data centers are designed to exhaust heat and deliver cool air so that equipment does not overheat. To maintain this delicate balance of air pressure and fluid dynamics, the cold aisles, where cool air is pumped in, and the hot aisles, where warm air is drawn out, must be kept separate and laid out consistently in every room.
Network: All of the devices in the data center are linked together so that they can communicate with one another. And network service providers connect businesses to the rest of the world, making it possible to use business apps from anywhere.
Security: A dedicated data center offers a level of physical security that can’t be matched when computer equipment sits in a wiring closet or another space that wasn’t built from the ground up to be secure. In a purpose-built data center, equipment sits behind locked doors and in locked cabinets, with protocols that ensure only authorized people can reach it.
On-premises: This is the traditional data center, built on the organization’s own property with all the infrastructure it needs. An on-premises data center carries significant real-estate and resourcing costs, but it suits applications that can’t move to the cloud for security, compliance, or other reasons.
Colocation: A “colo” is a data center owned and managed by a third party for a fee. You pay for the physical space, the power you consume, and network connectivity into the building. Physical security comes from locked racks or locked, keyed cage areas, and entering the building requires credentials and biometrics proving you are authorized. Within the colo model there are two options: retain full control of your resources, or choose a hosted option in which a third-party vendor manages the physical servers and storage units for you.
IaaS: Cloud providers like Amazon Web Services (AWS), Google Cloud, and Microsoft Azure offer Infrastructure as a Service (IaaS). Customers build and manage virtual infrastructure through remote access to dedicated slices of shared servers and storage, via a web-based user interface. You pay for cloud services based on the resources you consume, and you can grow or shrink your infrastructure on the fly. The service provider handles all equipment, security, power, and cooling; as a customer, you never have physical access to it.
Hybrid: In a hybrid model, resources can live in different places yet work together as if they were in the same place, with a fast network link between the sites keeping data flowing quickly. A hybrid configuration lets you keep latency- or security-sensitive applications close to home while using cloud-based resources as an extension of your infrastructure. It also makes it easy to spin up and tear down temporary capacity, so you don’t have to overbuy to cover business peaks.
Edge: Edge data centers usually hold equipment that needs to be close to the end user, such as cached storage devices holding copies of latency-sensitive data for performance reasons. It is also common to place backup systems in an edge data center, which makes it easier for operators to remove and replace backup media (such as tapes) destined for off-site storage facilities.
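The pay-per-use IaaS model described above can be sketched with a toy cost calculation. The rates and resource names below are hypothetical placeholders, not any real provider's pricing:

```python
# Toy pay-per-use cost model for IaaS. The rates are made-up
# placeholders, not any real cloud provider's prices.

HOURLY_RATES = {          # hypothetical $/hour per resource unit
    "vcpu": 0.04,
    "gib_ram": 0.005,
    "gib_storage": 0.0001,
}

def monthly_cost(vcpus: int, ram_gib: int, storage_gib: int,
                 hours: int = 730) -> float:
    """Estimate a month's bill: you pay only for what you provision."""
    hourly = (vcpus * HOURLY_RATES["vcpu"]
              + ram_gib * HOURLY_RATES["gib_ram"]
              + storage_gib * HOURLY_RATES["gib_storage"])
    return hourly * hours

# Shrinking the footprint shrinks the bill in direct proportion --
# that elasticity is the point of IaaS.
print(round(monthly_cost(4, 16, 100), 2))  # full-size instance
print(round(monthly_cost(2, 8, 50), 2))    # scaled-down instance
```

Halving every resource halves the monthly bill, which is exactly the "grow or shrink on the fly" behavior that distinguishes IaaS from owning fixed hardware.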
Data centers are built to service level agreements (SLAs) that cap the risk of service interruptions over a calendar year. For greater reliability and less downtime, a data center adds redundant resources (for example, four geographically diverse power circuits into the facility instead of two). Uptime is expressed as a percentage, and the number of 9s in that percentage is often spoken as “nines,” as in “four nines,” meaning 99.99%.
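The "nines" shorthand maps directly to a yearly downtime budget; the arithmetic can be sketched like this:

```python
# Convert an availability percentage ("nines") into the maximum
# downtime allowed per year under that SLA.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def max_annual_downtime_minutes(availability_pct: float) -> float:
    """Return the yearly downtime budget, in minutes, for a given uptime %."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for label, pct in [("two nines", 99.0), ("three nines", 99.9),
                   ("four nines", 99.99), ("five nines", 99.999)]:
    print(f"{label} ({pct}%): "
          f"{max_annual_downtime_minutes(pct):.1f} minutes/year")
```

So "four nines" allows roughly 52.6 minutes of downtime per year, while each additional nine cuts the budget by a factor of ten.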
There is a big difference between Tier 1 and Tier 4 availability, and as you might expect, costs can vary significantly from one tier to the next.
The traditional data center is built on a three-tier infrastructure with separate blocks of computing, storage, and network resources for each application. In a hyper-converged infrastructure (HCI), all three tiers are combined into a single building block called a node. When multiple nodes are clustered together, they form a pool of resources that can be managed by a software layer.
HCI is appealing in part because it combines storage, computing, and networking into a single system. This makes deployments across data centers, remote branches, and edge locations easier and less complicated.
What does it mean to “modernize” a data center?
In the past, the data center was treated as a separate collection of equipment for each application. As each application grew, new equipment had to be purchased and provisioned, which took time and consumed ever more space, power, and cooling.
With the rise of virtualization technologies, that view changed. Today we treat the data center as a single pool of resources that can be logically divided and, as a bonus, used more efficiently to serve multiple applications. Cloud services and application infrastructures (servers, storage, and networks) can be created on the fly from a single pane of glass. When hardware is used more efficiently, data centers become greener, needing less power and cooling.
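The efficiency gain from pooling can be illustrated with a toy consolidation model (the server capacity and per-application demands below are made-up numbers): with dedicated hardware every application gets at least one server sized for itself, while a shared pool packs workloads together first-fit.

```python
# Toy illustration of consolidation: with dedicated hardware each app
# gets its own server; with a shared pool, workloads are first-fit
# packed onto servers, so far fewer machines are needed.

import math

SERVER_CAPACITY = 32  # hypothetical vCPUs per physical server

def servers_dedicated(app_demands):
    """One (or more) dedicated servers per application."""
    return sum(math.ceil(d / SERVER_CAPACITY) for d in app_demands)

def servers_pooled(app_demands):
    """First-fit pack workloads into a shared pool of servers."""
    servers = []  # remaining capacity of each server in the pool
    for demand in app_demands:
        for i, free in enumerate(servers):
            if free >= demand:          # reuse spare capacity
                servers[i] -= demand
                break
        else:                           # no server has room: add one
            servers.append(SERVER_CAPACITY - demand)
    return len(servers)

apps = [6, 10, 4, 12, 8, 14, 5]   # vCPU demand per application
print(servers_dedicated(apps))    # one server per app: 7 machines
print(servers_pooled(apps))       # pooled first-fit: 2 machines
```

Seven lightly loaded silos collapse into two well-utilized machines, which is the core argument for virtualization: less idle hardware, and correspondingly less power and cooling.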
How does AI work in the data center?
AI lets algorithms take on the traditional Data Center Infrastructure Management (DCIM) role, monitoring power distribution, cooling efficiency, server workload, and cyber threats in real time and making automatic adjustments to improve efficiency. AI can shift workloads to underused resources, spot components at risk of failure, and keep the resource pool balanced, all with minimal human intervention.
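As a minimal sketch of the workload-shifting idea, here is a greedy scheduler that always places the next task on the least-loaded server. This stands in for the much richer optimization an AI-driven DCIM layer would perform; the task names and loads are invented for illustration:

```python
# Greedy least-loaded placement: a toy stand-in for the workload
# balancing an AI-driven DCIM system automates.

import heapq

def balance(tasks, n_servers):
    """Assign each task to the currently least-loaded server."""
    heap = [(0.0, i) for i in range(n_servers)]  # (load, server id)
    heapq.heapify(heap)
    assignment = {}
    for task_id, load in tasks:
        current, server = heapq.heappop(heap)    # least-loaded server
        assignment[task_id] = server
        heapq.heappush(heap, (current + load, server))
    return assignment, sorted(load for load, _ in heap)

tasks = [("db", 8.0), ("web", 3.0), ("batch", 6.0), ("cache", 2.0)]
assignment, loads = balance(tasks, 2)
print(assignment)  # {'db': 0, 'web': 1, 'batch': 1, 'cache': 0}
print(loads)       # per-server load after placement: [9.0, 10.0]
```

A production system would also weigh power draw, thermal conditions, and failure predictions, but the principle is the same: steer work toward spare capacity automatically.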
The data center’s future
The data center is far from obsolete. CBRE, one of the largest commercial real estate investment and services firms, reports that new capacity in the North American data center market grew by 17% in 2021, driven largely by hyperscalers such as AWS and Azure, along with social media giant Meta.
Every day, businesses create more data, such as business process data, customer data, IoT data, OT data, patient monitoring device data, and so on. And they want to do analytics on that data, either at the edge, on-premises, in the cloud, or in a hybrid model. Companies might not be building brand-new, centralized data centers, but they are updating their existing ones and putting them in more places near the edges.
In the future, demand from technologies like self-driving cars, blockchain, virtual reality, and the metaverse will only drive more growth in the number of data centers.