So you’re ready to start a cloud journey, or at least to start thinking about one. I’m sure you’re feeling overwhelmed by the number of questions you have. What are the pros? What are the cons? What cloud services provider should you use?
Whether your journey will direct you toward compute power, database storage, content delivery or other functionality, the Amazon Web Services (AWS) Cloud delivers what you need to take your organization to the next level of computing. Let’s take a look at AWS 101 and learn why this cloud computing solution makes sense.
Advantages of Cloud Computing
In past blogs, we’ve talked a lot about why cloud computing makes a difference, including this great overview of what you should know before you start the migration process. There are so many benefits to launching a cloud journey.
Moving from legacy solutions to cloud computing delivers massive economies of scale. You’ll enjoy lower infrastructure costs. You immediately slash your IT budget because you don’t have to make a capital investment in servers. You also won’t have to worry about on-premise server maintenance and repairs or backup systems.
Beyond the fact that you avoid huge capital costs, being “serverless” also dramatically increases your business agility. Cloud computing gives you an elastic environment where you simply pay as you go for what you use. You eliminate guessing about the demand for IT resources — and have an always-on computing environment.
Cloud computing infrastructure also is fully redundant, making it more reliable. It provides more powerful backup and disaster recovery solutions across secure regions and availability zones to keep your data accessible and safe 24/7.
Introduction to Critical AWS Services
Besides these advantages, the AWS Cloud offers eight important services. Here’s a snapshot of each of the core AWS services — and what makes them all essential to the AWS Cloud computing experience.
Amazon Elastic Compute Cloud (EC2)
Amazon EC2 delivers secure, reliable compute capacity and simplifies elastic web-scale computing for developers, allowing them to build failure-resistant applications. As AWS notes, Amazon EC2 “changes the economics of computing by allowing you to pay only for capacity that you actually use,” and it “allows you to quickly scale capacity, both up and down, as your computing requirements change.” Customers have complete control and the ability to interact with all instances.
AWS Auto Scaling
With this service, you get unified scaling for your cloud applications. You can monitor applications and automatically adjust capacity across multiple resources and services, ensuring predictable, cost-effective performance as applications scale rapidly. And, again, you pay only for what you need and can optimize for cost, availability or a balance of the two. You save dollars by scaling out to meet demand, then scaling back in (think down) once demand drops.
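The scale-out/scale-in behavior described above can be sketched as a simple target-tracking calculation. This is a hypothetical simplification, not the actual AWS Auto Scaling algorithm — real policies also involve cooldowns, warm-up periods and more nuanced metric math:

```python
import math

def desired_capacity(current_instances, metric_value, target_value,
                     min_size=1, max_size=10):
    """Compute the instance count that would bring an observed metric
    (e.g. average CPU utilization) back toward its target, clamped to
    the group's min/max bounds. A simplified target-tracking sketch."""
    raw = math.ceil(current_instances * metric_value / target_value)
    return max(min_size, min(max_size, raw))

# Demand spikes: 4 instances at 80% CPU against a 50% target -> scale out.
print(desired_capacity(4, 80.0, 50.0))  # 7
# Demand drops: 4 instances at 20% CPU -> scale back in.
print(desired_capacity(4, 20.0, 50.0))  # 2
```

The clamp at the end mirrors the min/max group size you would configure so a runaway metric can't scale you into a surprise bill.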
Amazon Simple Storage Service (S3)
This object storage service offers scalability, data availability, security and performance. It gives customers of all sizes and industries the ability to store and protect any amount of data for a range of use cases such as websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices and big data analytics. According to AWS, Amazon S3 is “designed for 99.999999999% (11 nines) of durability, and stores data for millions of applications for companies all around the world.”
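To put “11 nines” in perspective, here is a back-of-the-envelope calculation, assuming the durability figure applies independently per object per year (which is how AWS frames the design target):

```python
durability = 0.99999999999          # 11 nines, S3's stated design target
annual_loss_prob = 1 - durability   # chance a given object is lost in a year

objects = 10_000_000                # ten million stored objects
expected_losses = objects * annual_loss_prob
print(f"{expected_losses:.4f}")     # ~0.0001 objects lost per year
```

In other words, at that scale you would expect to lose a single object roughly once every 10,000 years.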
Amazon Route 53
A highly available and scalable cloud Domain Name System (DNS) web service, Amazon Route 53 gives developers and businesses a reliable, cost-effective way to route users to internet applications. It does this by translating domain names like www.onix.net into the corresponding numeric IP addresses like 192.0.2.1 that computers use to connect to each other. It connects user requests to AWS-backed infrastructure, including Amazon EC2 instances or Amazon S3 buckets. It also can route users to non-AWS infrastructure.
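Conceptually, the translation Route 53 performs is a name-to-address lookup. This toy resolver (hypothetical records, not a real Route 53 API) illustrates the idea, including the spirit of health-check-based failover routing:

```python
# Hypothetical zone data: each name maps to an ordered list of endpoints.
records = {
    "www.example.com": ["192.0.2.1", "192.0.2.2"],  # primary, secondary
}

def resolve(name, healthy):
    """Return the first healthy address for a name, mimicking
    failover routing: primary first, secondary if primary is down."""
    for address in records.get(name, []):
        if address in healthy:
            return address
    return None  # no record, or no healthy endpoint

print(resolve("www.example.com", {"192.0.2.1", "192.0.2.2"}))  # 192.0.2.1
print(resolve("www.example.com", {"192.0.2.2"}))               # 192.0.2.2
```

The real service adds many routing policies on top of this idea (weighted, latency-based, geolocation), but the core job is the same lookup.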
Amazon Relational Database Service (RDS)
Amazon RDS helps users easily and cost-efficiently set up, operate and scale a relational database in the cloud. Offering resizable capacity, it automates time-consuming tasks such as hardware provisioning, database setup, patching and backups. It gives users the choice of six familiar database engines: Amazon Aurora, MariaDB, MySQL, Oracle Database, PostgreSQL and SQL Server.
Amazon Virtual Private Cloud (VPC)
With Amazon Virtual Private Cloud (VPC), you can provision an isolated section of the AWS Cloud to launch AWS resources within a self-defined virtual network. This gives you complete control over your virtual networking environment. For example, you can have a public-facing subnet with internet access for your web servers while keeping your backend systems, including databases and application servers, in a private subnet without internet access.
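The public/private subnet split described above can be sketched with Python's standard ipaddress module. The CIDR ranges here are illustrative choices, not AWS defaults:

```python
import ipaddress

# A hypothetical VPC covering 10.0.0.0/16, carved into /24 subnets.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))

public_subnet = subnets[0]    # e.g. web servers, routed to the internet
private_subnet = subnets[1]   # e.g. databases, no internet route

print(public_subnet)   # 10.0.0.0/24
print(private_subnet)  # 10.0.1.0/24
print(ipaddress.ip_address("10.0.1.25") in private_subnet)  # True
```

In a real VPC, what makes a subnet “public” isn't its address range but its route table — whether it has a route to an internet gateway.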
Amazon CloudWatch
DevOps engineers, site reliability engineers (SREs), developers and IT managers get actionable data and insights about applications through Amazon CloudWatch. It allows you to react to system-wide performance changes, optimize resource utilization and get a unified view of the operational health of AWS resources, applications and services that run on AWS and on-premise servers. Data is presented in the form of metrics, logs and events.
AWS Lambda
With AWS Lambda, you get serverless compute capabilities that allow you to run code for virtually any type of application or backend service without provisioning or managing servers. No administration is needed. Upload code and let Lambda handle everything needed to run and scale it with high availability. You pay for only the compute time you consume.
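A Lambda function is just code with a handler entry point: AWS invokes it with an event and a context. Here is a minimal Python handler, invoked locally for illustration (in production you upload it and Lambda supplies the arguments):

```python
import json

def lambda_handler(event, context):
    """Entry point AWS Lambda calls. `event` carries the request data;
    `context` carries runtime metadata (unused in this sketch)."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Local invocation for illustration only -- in the cloud, Lambda calls this.
response = lambda_handler({"name": "AWS"}, None)
print(response["body"])  # {"message": "Hello, AWS!"}
```

The dict shape returned here follows the common API Gateway proxy convention; a Lambda function triggered by other services can return whatever its caller expects.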
Shared Security Responsibility
Security and compliance are important factors in your cloud journey. Broadly, AWS relies on three fundamental approaches to protect cloud infrastructure and data:
- Network and host-level boundary protection
- System security configuration and maintenance
- Service-level protection enforcement
When it comes to security in the cloud, AWS works on a shared responsibility model. This means both you, as the customer, and AWS, as the cloud service provider, have security oversight for different aspects of your cloud computing environment.
Customers manage security “in” the cloud. As the cloud service provider, AWS is responsible for the security “of” the cloud. There’s a difference.
AWS protects the crucial infrastructure that runs all of the services offered in the AWS Cloud, including the hardware, software, networking and facilities that run AWS Cloud services.
Customer responsibilities include protecting customer data, platforms and applications using identity and access management tools; proper OS, network and firewall configurations; and encryption of data, file systems and network traffic. Customers are also responsible for security maintenance such as patches and updates.
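As a concrete example of security “in” the cloud, here is a minimal IAM policy granting read-only access to a single S3 bucket (the bucket name is hypothetical). Least-privilege configuration like this falls on the customer's side of the shared responsibility model:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}
```

Note the two resource ARNs: ListBucket applies to the bucket itself, while GetObject applies to the objects inside it.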
The actual level of customer responsibility, according to AWS, varies depending on the services used, the integration of those services into their IT environment, and applicable laws and regulations. Both AWS and customers share the responsibility for managing IT controls as well. You would manage some, AWS would manage others, and there are yet others that you both would oversee.
Security is a vital aspect of your migration plan, deployment and ongoing life in the cloud. Your data and infrastructure are only as secure as you make them. That’s why it’s important to understand your responsibilities before your migration and to hold up your end of the shared security bargain.
Readying Your Organization for Cloud Migration
As you start to think about what I’ve presented so far, perhaps you’re either running scared from what it takes to migrate to the cloud, or you’re so psyched that you’re ready to migrate tomorrow. Either way, please take a breath and relax.
You want to start small. Moving to the cloud isn’t something to rush into. In most cases, to get started, you need buy-in from top-level executives and key players across your organization. And this doesn’t mean just your CEO, COO and CIO.
At least one of them definitely needs to be your executive sponsor, but moving to the cloud involves many stakeholders: other managerial decision-makers, the compliance and legal departments, your network engineers and architects, database administrators and software developers, to name only a few. These folks make up your “cloud center of excellence” (CCOE) team.
Once you identify these key players, it's time to talk and listen to each other. Don’t be afraid to question the status quo in search of something better. Make discussions experimentation-driven and results-oriented. Don’t be afraid to iterate quickly and fail fast during this phase.
By the time this phase wraps up, you will have discussed the impact a cloud migration will have on your budget and operations and should be able to set a realistic timeline. Planning for the average cloud migration can take up to six months. There’s no “easy button.”
We’ve already discussed security, and I’ll reiterate that it’s the most vital thing you need to address upfront before you make any sort of move to the cloud. At Onix, we call security “Job Zero,” and you plan upward from there when taking these other essential steps:
Understanding Your Architecture
It’s time to create an up-to-date diagram of your current architecture so you know what migration will entail. Be sure to capture networking, subnets, IP ranges, server sizes, cores, RAM and disk breakdowns, to name just a few of the architectural details worth including in the diagram.
Getting a Handle on Your Dependencies
Discuss application flow in your current computing environment. What databases, servers and applications communicate with each other? You can then create a flow diagram and plan your move groups based on connections and dependencies. Remember, connections make processes work; don’t break them during migration.
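The move-group planning described above can be sketched as grouping connected systems: if two systems talk to each other, directly or transitively, they belong in the same migration wave. A minimal sketch using connected components over a hypothetical dependency list:

```python
from collections import defaultdict

def move_groups(connections):
    """Group systems that communicate (directly or transitively) so each
    group can migrate together without breaking a live connection."""
    graph = defaultdict(set)
    for a, b in connections:
        graph[a].add(b)
        graph[b].add(a)
    seen, groups = set(), []
    for node in graph:
        if node in seen:
            continue
        stack, group = [node], set()
        while stack:  # depth-first walk of one connected component
            n = stack.pop()
            if n in group:
                continue
            group.add(n)
            stack.extend(graph[n] - group)
        seen |= group
        groups.append(group)
    return groups

# Hypothetical environment: the web app talks to its DB and auth service;
# reporting only touches the warehouse.
deps = [("web-app", "orders-db"), ("web-app", "auth-service"),
        ("reporting", "warehouse-db")]
for g in move_groups(deps):
    print(sorted(g))
```

Here the web app and its two dependencies would migrate as one wave, while reporting and its warehouse could move separately.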
Deciding to Replicate or Optimize
Take a look at your current computing setup and determine if there are improvements you need to make. Make it a well-architected review so you can be sure you update as much as possible BEFORE the move. This includes cleaning up old data in databases and transferring unused data and files to storage.
You’ll also want to determine if you want to replicate your current on-premise setup, or instead use cloud-native options for a truly cloud-first experience. Then you’ll need to decide if you want to optimize your environment post-migration, or build fresh upfront.
It’s a lot to consider, but it’s also extremely important to the overall success of the migration project to tackle it ahead of time.
Planning for Data Transfer
Next, it’s time to decide how you want to transfer your existing data from your on-prem servers to the cloud. There are multiple AWS tools to help you do this:
- AWS VPN
- AWS Direct Connect
- AWS Snowball, a data transfer appliance
- AWS Storage Gateway
- AWS Database Migration Service (DMS)
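Which tool fits often comes down to simple arithmetic: how long would the transfer take over your link? A rough sketch (the one-week threshold and 80% utilization figure here are assumptions for illustration, though AWS's own guidance follows similar logic):

```python
def transfer_days(data_tb, bandwidth_mbps, utilization=0.8):
    """Estimated days to push `data_tb` terabytes over a link you can
    use at `utilization` of its `bandwidth_mbps` capacity."""
    bits = data_tb * 8 * 1e12                          # terabytes -> bits
    seconds = bits / (bandwidth_mbps * 1e6 * utilization)
    return seconds / 86_400

# 50 TB over a 100 Mbps link at 80% utilization:
days = transfer_days(50, 100)
print(round(days))  # 58 days -- an offline appliance starts to make sense
if days > 7:
    print("Consider an offline transfer (e.g. Snowball)")
```

The same math flipped around tells you when network-based options like Direct Connect or Storage Gateway are perfectly adequate.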
Keeping Up with the Cloud
I’ve thrown a lot of information at you in this blog, but even so, it’s only a small part of all that the AWS Cloud can provide. And, that cloud is always evolving and improving. Continuing education about what’s new and updated in your Amazon Web Services Cloud is a must.
Encourage and incorporate education for users, and understand any cloud initiative has ongoing time and financial impacts.
Moving to the cloud is the first step in discovering a better way to work. Living in it, knowing it and growing with it is an ongoing process that delivers countless benefits.
Does the AWS Cloud sound like something that could help your organization? Learn more about cloud migration and transformation with AWS by registering for our Cloud Ready Assessment.