Industry Practices and Tools
1. What is the need for VCS?
Version control is incredibly important, especially in today's fast-paced environment with increasingly shorter product release cycles. By tracking changes across all software assets and facilitating seamless collaboration, a version control system allows development and DevOps teams to build and ship better products faster.
Modern version control tools offer the following benefits:
- Protects Your Most Valuable Assets
- Scales Efficiently and Flexibly
- Tracks Changes Accurately
- Facilitates Seamless Team Collaboration
2. Differentiate the three models of VCSs, stating their pros and cons.
- Local Data Model: This is the simplest variation of version control, and it requires that all developers have access to the same file system.
Advantages
- The user has complete control over access to the files, so it is more secure than online storage, where you do not know where your data is stored or who has access to it.
- The data can be accessed easily and quickly.
- The user does not require an internet connection to access the files.
Disadvantages
- The user has to keep regular backups of the data to prevent loss.
- The user is completely responsible for the safety of the data.
- It is more difficult to share data with others; for example, you need to upload it to a hosted server and then send an email or a link to the intended user.
- Storing everything locally takes up more storage space.
- Client-Server Model: Using this model, developers use a single shared repository of files. It requires that all developers have access to the repository via the internet or a local network. This is the model used by Subversion (SVN).
Advantages
Organizations often look to technology to maintain service quality and sustain their market position. Deploying client-server computing in an organization can effectively increase its productivity through cost-effective user interfaces, enhanced data storage, vast connectivity and reliable application services.
- Improved Data Sharing: Data retained by normal business processes and manipulated on a server is available to designated users (clients) through authorized access.
- Integration of Services: Every client can access corporate information through a desktop interface, eliminating the need to log into a terminal mode or a separate processor.
- Shared Resources Amongst Different Platforms: Client-server applications are built independently of the hardware platform or operating system software, providing an open computing environment in which users obtain the services of clients and servers (database, application and communication services).
- Data Processing Capability Regardless of Location: Client-server users can log into a system regardless of the location or the technology of the processors.
- Easy Maintenance: Client-server architecture is a distributed model in which responsibilities are dispersed among independent computers integrated across a network. It is therefore easy to replace, repair, upgrade and relocate a server while clients remain unaffected; this transparent change is known as encapsulation.
- Security: Servers have better control over access and resources, ensuring that only authorized clients can access or manipulate data and that server updates are administered effectively.
Disadvantages
- Overloaded Servers: When there are many simultaneous client requests, the server can become severely overloaded, creating traffic congestion.
- Impact of Centralized Architecture: Because the architecture is centralized, if a critical server fails, client requests cannot be fulfilled; client-server therefore lacks the robustness of a good network.
- Distributed Model: In this model, each developer works directly with their own local repository, and changes are shared between repositories as a separate step. This is the model used by Git, an open-source tool used by many of the largest software development projects.
Advantages
- Gives better performance than a single system.
- If one node in the distributed system malfunctions or becomes corrupted, another node can take over.
- More resources can be added easily.
- Resources such as printers can be shared across multiple PCs.
Disadvantages
- Security problems arise due to sharing.
- Some messages can be lost in the network.
- Bandwidth is another problem: if the data is large, the network links may need to be upgraded, which tends to become expensive.
- Overloading is another problem in distributed systems.
- If a database hosted on one local system is accessed remotely by many users, performance becomes slow.
- Databases in a distributed setting are more difficult to administer than in a single-user system.
3. Git and GitHub: are they the same or different? Discuss with facts.
Git is a distributed version control tool that can manage a development project's source code history, while GitHub is a cloud-based platform built around the Git tool. Git is a tool a developer installs locally on their computer, while GitHub is an online service that stores code pushed to it from computers running the Git tool. The key difference between Git and GitHub is that Git is an open-source tool developers install locally to manage source code, while GitHub is an online service to which developers who use Git can connect and upload or download resources.
One way to examine the differences between GitHub and Git is to look at their competitors. Git competes with centralized and distributed version control tools such as Subversion, Mercurial, ClearCase and IBM's Rational Team Concert. On the other hand, GitHub competes with cloud-based SaaS and PaaS offerings, such as GitLab and Atlassian's Bitbucket.
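As a simple illustration of that relationship, the commands below (a minimal sketch; the repository URL and branch name are placeholders) use the locally installed Git tool to connect a repository to a GitHub remote and upload or download code:

```bash
# Link an existing local repository to a GitHub remote (placeholder URL)
git remote add origin https://github.com/<user>/<repo>.git

# Upload local commits to GitHub and track the remote branch
git push -u origin main

# Alternatively, start from GitHub: download a copy of a hosted repository
git clone https://github.com/<user>/<repo>.git
```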
| Git | GitHub |
| --- | --- |
| Installed locally. | Hosted in the cloud. |
| First released in 2005. | Company launched in 2008. |
| Maintained by the Linux Foundation. | Purchased in 2018 by Microsoft. |
| Focused on version control and code sharing. | Focused on centralized source code hosting. |
| Primarily a command-line tool. | Administered through the web. |
| Provides a desktop interface named Git GUI. | Desktop interface named GitHub Desktop. |
| No user management features. | Built-in user management. |
| Competes with Mercurial, Subversion, ClearCase and IBM Rational Team Concert; open-source licensed. | Competes with Atlassian Bitbucket and GitLab; includes a free tier and pay-for-use tiers. |
4. Compare and contrast the Git commands commit and push.
Basically, git commit "records changes to the repository", while git push "updates remote refs along with associated objects". So the first is used in connection with your local repository, while the latter is used to interact with a remote repository.
- git commit records your changes in the local repository.
- git push updates the remote repository with your local changes.
The staging area (the index) is what makes the local commit step so flexible, as the notes below explain; a command sequence illustrating commit, push and staging follows the list.
- Staging helps you split up one large change into multiple commits. Let's say you worked on a large-ish change, involving a lot of files and quite a few different subtasks. You didn't actually commit any of these; you were "in the zone", as they say, and you didn't want to think about splitting up the commits the right way just then. (And you're smart enough not to make the whole thing one honking big commit!) Now the change is all tested and working, and you need to commit all of this properly, in several clean commits each focused on one aspect of the code changes. With the index, just stage each set of changes and commit until no more changes are pending. This works really well with git gui if you're into that, or you can use git add -p or, with newer versions of Git, git add -e.
- Staging helps in reviewing changes. Staging lets you "check off" individual changes as you review a complex commit, and concentrate on the stuff that has not yet passed your review. Before you commit, you'll probably review the whole change using git diff. If you stage each change as you review it, you'll find that you can concentrate better on the changes that are not yet staged. git gui is great here: its two left panes show unstaged and staged changes respectively, and you can move files between those two panes (stage/unstage) just by clicking on the icon to the left of the filename. Even better, you can stage partial changes to a file. In the right pane of git gui, right-click on a change that you approve of and choose "stage hunk". Just that change (not the entire file) is now staged; in fact, if there are other, unstaged changes in that same file, you'll find that the file now appears in both the top and bottom left panes.
- Staging helps when a merge has conflicts. When a merge happens, changes that merge cleanly are updated both in the staging area and in your work tree. Only changes that did not merge cleanly (i.e., caused a conflict) will show up when you do a git diff, or in the top left pane of git gui. Again, this lets you concentrate on the stuff that needs your attention: the merge conflicts.
- Staging helps you keep extra local files hanging around. Usually, files that should not be committed go into .gitignore or the local variant, .git/info/exclude. However, sometimes you want a local change to a file that cannot be excluded (which is not good practice but can happen sometimes). For example, perhaps you upgraded your build environment and it now requires an extra flag or option for compatibility, but if you commit the change to the Makefile, the other developers will have a problem. Of course you have to discuss with your team and work out a more permanent solution, but right now, you need that change in your working tree to do any work at all! Another situation could be that you want a new local file that is temporary, and you don't want to bother with the ignore mechanism. This may be some test data, a log file or trace file, or a temporary shell script to automate some test... whatever. In Git, all you have to do is never stage that file or that change. That's it.
- Staging helps you sneak in small changes. Let's say you're in the middle of a somewhat large-ish change and you are told about a very important bug that needs to be fixed ASAP. The usual recommendation is to do this on a separate branch, but let's say this fix is really just a line or two and can be tested just as easily without affecting your current work. With Git, you can quickly make and commit only that change, without committing all the other stuff you're still working on. Again, if you use git gui, whatever's in the bottom left pane gets committed, so just make sure only that change gets there, then commit and push!
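As a minimal sketch of the points above (the file name, commit messages and branch name are placeholders, not taken from any particular project):

```bash
# Interactively stage only some hunks of the pending changes
git add -p

# Record the staged changes in the LOCAL repository only
git commit -m "Fix input validation"

# Stage and commit the remaining work as a separate, focused commit
git add docs/usage.md
git commit -m "Update usage notes"

# Publish the local commits to the remote repository
git push origin main
```

Note that git commit only ever touches the local repository; nothing reaches the remote until git push is run.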
6. Explain the collaboration workflow of Git, with an example.
Collaboration
In terms of Git process, collaboration is often about branching workflows. Thinking ahead about how you will intertwine commit trees will help you minimize integration bugs and support your release management strategy.
Workflow
A Git workflow is a recipe or recommendation for how to use Git to accomplish work in a consistent and productive manner. Git workflows encourage users to leverage Git effectively and consistently. Git offers a lot of flexibility in how users manage changes, and given that focus on flexibility, there is no standardized process for how to interact with Git. When working with a team on a Git-managed project, it's important to make sure the team is all in agreement on how the flow of changes will be applied. To ensure the team is on the same page, an agreed-upon Git workflow should be developed or selected. There are several publicized Git workflows that may be a good fit for your team; some of these workflow options are discussed here.
The array of possible workflows can make it hard to know where to begin when implementing Git in the workplace. This section provides a starting point by surveying the most common Git workflows for software teams. As you read through, remember that these workflows are designed to be guidelines rather than concrete rules. The aim is to show what's possible, so you can mix and match aspects from different workflows to suit your individual needs.
A successful Git workflow
When evaluating a workflow for your team, it is most important that you consider your team's culture. You want the workflow to enhance the effectiveness of your team and not be a burden that limits productivity. Some things to consider when evaluating a Git workflow are:
- Does this workflow scale with team size?
- Is it easy to undo mistakes and errors with this workflow?
- Does this workflow impose any new, unnecessary cognitive overhead on the team?
Examples:
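For instance, a common feature-branch workflow looks roughly like the following sketch (the repository URL, branch name and remote name are placeholders; many teams open a pull request on their hosting service instead of merging locally):

```bash
# Get a local copy of the shared repository (placeholder URL)
git clone https://github.com/<team>/<project>.git
cd <project>

# Create an isolated branch for a new piece of work
git checkout -b feature/login-form

# Work, then record the changes locally
git add .
git commit -m "Add login form"

# Share the branch with the rest of the team
git push -u origin feature/login-form

# After review (often via a pull request), integrate into the main line
git checkout main
git pull origin main
git merge feature/login-form
git push origin main
```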
7. Discuss the benefits of CDNs.
1. Your reliability and response times get a huge boost
A high-performing website equals high conversion and growing sales. Latency and speed issues tend to cripple web businesses and cause damage. A few seconds can mean the difference between a successful conversion and a bounce. A reliable CDN ensures that the load speed is more than optimal and that online transactions are made seamlessly.
2. A CDN enables global reach
Over one third of the world's population is online, which means that global use of the internet has increased exponentially over the last 15 years. CDNs provide solutions through cloud acceleration with local POPs (points of presence). This global reach eliminates latency problems that interrupt long-distance online transactions and cause slow load times.
3. A CDN saves a lot of money
Hiring a CDN results in noticeable savings for a business; rather than investing in infrastructure and separate service providers all across the globe, a global CDN can eliminate the need to pay for costly foreign hosting and thus save your business a lot of money. A global CDN offers a single platform to handle all of the separate operations, working across numerous regions for a reasonable price. CDNs are also recommended for companies with a tight budget.
4. 100 percent availability
Due to the distribution of assets across many regions, CDNs have automatic server availability sensing mechanisms with instant user redirection. As a result, CDN websites experience 100 percent availability, even during massive power outages, hardware issues or network problems.
5. Decreased server load
The strategic placement of a CDN can decrease the server load on interconnects, public and private peers and backbones, freeing up overall capacity and decreasing delivery costs. Essentially, the content is spread out across several servers, as opposed to offloading it all onto one large server.
6. 24/7 customer support
Quality CDNs are known for outstanding customer support. In other words, there is a customer support team on standby at all times, at your disposal. Whenever something occurs, you have backup waiting to help you fix your performance-related problems. Having a support team on quick dial is a smart business decision: you're not just paying for a cloud service, you're paying for a broad spectrum of services that help your business grow on a global scale.
7. Increase in the number of Concurrent Users
Strategically
placing the servers in a CDN can result in high network
backbone
capacity, which equates to a significant increase in the number of
users accessing the network at a given time. For example, where there
is a 100 GB/s network backbone with 2 tb/s
capacity, only 100 GB/s can be delivered. However, with a CDN, 10
servers will be available at 10 strategic locations and can then
provide a total capacity of 10 x 100 GB/s.
8. DDoS protection
Other than inflicting huge economic losses, DDoS attacks can also have a serious impact on the reputation and image of the victimized company or organization. Whenever customers type in their credit card numbers to make a purchase online, they are placing their trust in that business. DDoS attacks are on the rise and new approaches to internet security are being developed, all of which have helped increase the growth of CDNs, as cloud security adds another layer of protection. Cloud solutions are designed to stop an attack before it ever reaches your data center: a CDN will take on the traffic and keep your website up and running. This means you need not be concerned about DDoS attacks impacting your data center, keeping your business' website safe and sound.
9. Analytics
Content delivery networks not only deliver content at a fast pace, they can also offer priceless analytical information to discover trends that could lead to advertising sales and reveal the strengths and weaknesses of your online business. CDNs have the ability to deliver real-time load statistics, optimize capacity per customer, display active regions, indicate which assets are popular, and report viewing details to their customers. These details are extremely important, since usage logs are deactivated once the server source has been added to the CDN. This analysis shows everything a developer needs to know to further optimize the website. In-depth reporting ultimately leads to a performance increase, which results in a better user experience and in turn reflects on sales and conversion rates.
Hiring a CDN is a growing trend in the internet community. Performance and security are everything, and a CDN is there to deliver both. A high-performance website creates income, growth, web presence and brand awareness. If your web business is suffering, you should consider hiring a CDN today.
8. How do CDNs differ from web hosting servers?
- Web hosting is used to host your website on a server and let users access it over the internet. A content delivery network is about speeding up the access/delivery of your website's assets to those users.
- Traditional web hosting delivers 100% of your content to the user from where your web server is located; if the users are located across the world, they still must wait for the data to be retrieved from that one place. A CDN takes a majority of your static and dynamic content and serves it from across the globe, decreasing download times. Most of the time, the closer the CDN server is to the web visitor, the faster the assets will load for them.
- Web hosting normally refers to one server. A content delivery network refers to a global network of edge servers which distributes your content from a multi-host environment.
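One informal way to see this difference in practice is to inspect a site's HTTP response headers: many CDNs add cache-status or edge-location headers, although the exact header names vary by provider, and the domain below is only a placeholder:

```bash
# Inspect response headers for CDN hints (e.g. cache HIT/MISS indicators
# or an edge/POP identifier; names differ per provider)
curl -I https://www.example.com/
```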
9. Identify free and commercial CDNs.
Free CDNs
- Coral Content Distribution Network (defunct)
Traditional commercial CDNs
- BelugaCDN
Commercial CDNs using P2P for delivery
10. Discuss the requirements for virtualization.
| Component | Requirement |
| --- | --- |
| Operating Systems | |
| Hard Drive | 500 GB recommended. Depending upon the number of virtual machines that you are planning to back up, ensure that the backup server computer has sufficient free space to store all virtual machine data. |
| Memory | 16 GB RAM minimum required. |
| Processor | All Windows-compatible processors supported. |
| IIS | IIS must be enabled on the backup server. |
11. Discuss and compare the pros and cons of different virtualization techniques at different levels.
Advantages of Virtualization
The advantages of switching to a virtual environment are plentiful, saving you money and time while providing much greater business continuity and the ability to recover from disaster.
- Reduced spending. For companies with fewer than 1,000 employees, up to 40 percent of an IT budget is spent on hardware. Purchasing multiple servers is often a good chunk of this cost. Virtualizing requires fewer servers and extends the lifespan of existing hardware. This also means reduced energy costs.
- Easier backup and disaster recovery. Disasters are swift and unexpected. In seconds, leaks, floods, power outages, cyber-attacks, theft and even snow storms can wipe out data essential to your business. Virtualization makes recovery much swifter and more accurate, with less manpower and a fraction of the equipment, because it is all virtual.
- Better business continuity. With an increasingly mobile workforce, having good business continuity is essential. Without it, files become inaccessible, work goes undone, processes are slowed and employees are less productive. Virtualization gives employees access to software, files and communications anywhere they are and can enable multiple people to access the same information for more continuity.
- More efficient IT operations. Going to a virtual environment can make everyone's job easier, especially for the IT staff. Virtualization provides an easier route for technicians to install and maintain software, distribute updates and maintain a more secure network. They can do this with less downtime, fewer outages, quicker recovery and instant backup as compared to a non-virtual environment.
Disadvantages of Virtualization
The disadvantages of virtualization are mostly those that would come with any technology transition. With careful planning and expert implementation, all of these drawbacks can be overcome.
- Upfront costs. Investment in virtualization software, and possibly additional hardware, might be required to make virtualization possible. This depends on your existing network; many businesses have sufficient capacity to accommodate virtualization without requiring a lot of cash. This obstacle can also be more readily navigated by working with a Managed IT Services provider, who can offset the cost with monthly leasing or purchase plans.
- Software licensing considerations. This is becoming less of a problem as more software vendors adapt to the increased adoption of virtualization, but it is important to check with your vendors to clearly understand how they view software use in a virtualized environment.
- Possible learning curve. Implementing and managing a virtualized environment will require IT staff with expertise in virtualization. On the user side, a typical virtual environment will operate similarly to the non-virtual environment. There are some applications that do not adapt well to the virtualized environment; this is something that your IT staff will need to be aware of and address prior to converting.
For many businesses comparing the advantages to the disadvantages, moving to a virtual environment is typically the clear winner. Even if the drawbacks present some challenges, these can be quickly navigated with an expert IT team or by outsourcing the virtualization process to a Managed IT Services provider. The seeming disadvantages are more likely to be simple challenges that can be navigated and overcome easily.
Luckily, visualization solutions are evolving as rapidly as the rest of the tech stack. Charts, videos, infographics and, at the cutting edge, even virtual reality and augmented reality (VR & AR) presentations offer increasingly engaging and intuitive channels of communication.
Here's my run-down of some of the best, most popular or most innovative data visualization tools available today. These are all paid-for (although they all offer free trials or personal-use licences). Look out for another post soon on completely free and open source alternatives.
Tableau is often regarded as the grand master of data visualization software, and for good reason. Tableau has a very large customer base of 57,000+ accounts across many industries due to its simplicity of use and ability to produce interactive visualizations far beyond those provided by general BI solutions. It is particularly well suited to handling the huge and very fast-changing datasets used in Big Data operations, including artificial intelligence and machine learning applications, thanks to integration with a large number of advanced database solutions including Hadoop, Amazon AWS, MySQL, SAP and Teradata. Extensive research and testing has gone into enabling Tableau to create graphics and visualizations as efficiently as possible, and to make them easy for humans to understand.
Qlik, with its QlikView tool, is the other major player in this space and Tableau's biggest competitor. The vendor has over 40,000 customer accounts across over 100 countries, and those that use it frequently cite its highly customizable setup and wide feature range as a key advantage. This, however, can mean that it takes more time to get to grips with and use to its full potential. In addition to its data visualization capabilities, QlikView offers powerful business intelligence, analytics and enterprise reporting capabilities, and I particularly like the clean and clutter-free user interface. QlikView is commonly used alongside its sister package, Qlik Sense, which handles data exploration and discovery. There is also a strong community, and there are plenty of third-party resources available online to help new users understand how to integrate it in their projects.
FusionCharts is a very widely-used, JavaScript-based charting and visualization package that has established itself as one of the leaders in the paid-for market. It can produce 90 different chart types and integrates with a large number of platforms and frameworks, giving a great deal of flexibility. One feature that has helped make FusionCharts very popular is that rather than having to start each new visualization from scratch, users can pick from a range of "live" example templates, simply plugging in their own data sources as needed.
Like FusionCharts, this tool also requires a licence for commercial use, although it can be used freely as a trial, or for non-commercial or personal use. Its website claims that it is used by 72 of the world's 100 largest companies, and it is often chosen when a fast and flexible solution must be rolled out with a minimum need for specialist data visualization training before it can be put to work. A key to its success has been its focus on cross-browser support, meaning anyone can view and run its interactive visualizations, which is not always true with newer platforms.
Datawrapper is increasingly becoming a popular choice, particularly among media organizations, which frequently use it to create charts and present statistics. It has a simple, clear interface that makes it very easy to upload CSV data and create straightforward charts, and also maps, that can quickly be embedded into reports.
Plotly
Plotly enables more complex and sophisticated visualizations, thanks to its integration with analytics-oriented programming languages such as Python, R and Matlab. It is built on top of the open source d3.js visualization libraries for JavaScript, but this commercial package (with a free non-commercial licence available) adds layers of user-friendliness and support, as well as inbuilt support for APIs such as Salesforce.
Sisense provides a full-stack analytics platform, but its visualization capabilities provide a simple-to-use drag-and-drop interface which allows charts and more complex graphics, as well as interactive visualizations, to be created with a minimum of hassle. It enables multiple sources of data to be gathered into one easily accessed repository, where it can be queried through dashboards instantaneously, even across Big Data-sized sets. Dashboards can then be shared across organizations, ensuring even non-technically-minded staff can find the answers they need to their problems.
A hypervisor is a process that separates a computer's operating system and applications from the underlying physical hardware. It is usually implemented as software, although embedded hypervisors can be created for things like mobile devices.
The hypervisor drives the concept of virtualization by allowing the physical host machine to operate multiple virtual machines as guests, helping maximize the effective use of computing resources such as memory, network bandwidth and CPU cycles.
The explanation of a hypervisor up to this point has been fairly simple: it is a layer of software that sits between the hardware and the one or more virtual machines that it supports. Its job is also fairly simple. The three characteristics defined by Popek and Goldberg illustrate these tasks:
- Provide an environment identical to the physical environment
- Provide that environment with minimal performance cost
- Retain complete control of the system resources
14. How does emulation differ from VMs?
For many, emulation and virtualization go hand in hand, but there are actually some really key differences. When a device is being emulated, a software-based construct has replaced a hardware component. It's possible to run a complete virtual machine on an emulated server. However, virtualization makes it possible for that virtual machine to run directly on the underlying hardware, without needing to impose an emulation tax (the processing cycles needed to emulate the hardware).
With virtualization, the virtual machine uses the hardware directly, although there is an overarching scheduler. As such, no emulation is taking place, but this limits what can be run inside virtual machines to operating systems that could otherwise run atop the underlying hardware. That said, this method provides the best overall performance of the two solutions.
With emulation, since an entire machine can be created as a virtual construct, there is a wider variety of opportunities, but with the aforementioned emulation penalty. However, emulation makes it possible to, for example, run programs designed for a completely different architecture on an x86 PC. This approach is common, for example, when it comes to running old games designed for obsolete platforms on today's modern systems. Because everything is emulated in software, there is a performance hit with this method, although today's massively powerful processors often compensate for it.
Both methods are used for various purposes and are sometimes confused, so be aware of the differences.
15. Compare and contrast VMs and containers/Docker, indicating their advantages and disadvantages.
When it comes to cloud infrastructure, VMware has long been the go-to standard for its many advantages: its ability to run multiple OS environments that don't affect each other, the choice of virtual machine types, and its consolidated toolkit that makes VM management easy. However, consider this: what if you had an alternative to VMware that was more lightweight, economical and more scalable? That's exactly what Docker is: a container technology that lets users develop distributed applications.
What is a Virtual Machine?
The concept of a virtual machine is simple, really: it's a virtual server that emulates a hardware server. A virtual machine relies on the system's physical hardware to emulate the exact same environment on which you install your applications. Depending on your use case, you can use a system virtual machine (which runs an entire OS as a process, allowing you to substitute a virtual machine for a real machine), or process virtual machines that let you execute computer applications alone in the virtual environment.
What is Docker?
Docker is an open source project that offers a software development solution known as containers. To understand Docker, you need to know what containers are. According to Docker, a container is "a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it." And since containers are platform-independent, Docker can run across both Windows- and Linux-based platforms. In fact, Docker can also be run within a virtual machine if need be.
The main purpose of Docker is that it lets you run microservice applications in a distributed architecture.
Docker Architecture
Docker's architecture is also client-server based. However, it's a little more complicated than a virtual machine because of the features involved. It consists of four main parts:
- Docker Client: This is how you interact with your containers. Call it the user interface for Docker.
- Docker Objects: These are the main components of Docker: your containers and images. We mentioned already that containers are the placeholders for your software, and can be read from and written to. Container images are read-only, and are used to create new containers.
- Docker Daemon: A background process responsible for receiving commands and passing them to the containers via the command line.
- Docker Registry: Commonly known as Docker Hub, this is where your container images are stored and retrieved.
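As a rough sketch of how these parts interact (the image, container name and port mapping are arbitrary examples), the client sends each command to the daemon, which pulls images from the registry and runs containers from them:

```bash
# Client -> daemon: pull an image from the registry (Docker Hub by default)
docker pull nginx:alpine

# The daemon creates and starts a container from the read-only image
docker run -d --name web -p 8080:80 nginx:alpine

# List running containers and locally stored images
docker ps
docker images
```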
Common Use Cases
Now that you have an idea of what VMs and Docker containers are, it's important to understand the potential use cases for both. While they are both more or less used to develop applications, here is where the differences really start.
Real-World Use Case for VMs
Starling Bank is a digital-only bank that was built in just one year on VMs provided by AWS. This is possible because of the efficiency virtual machines deliver over traditional hardware servers. Importantly, it cost Starling Bank just a tenth of what traditional servers would have.
Real-World Use Case for Docker
PayPal uses Docker to drive "cost efficiency and enterprise-grade security" for its infrastructure. PayPal runs VMs and containers side by side and says that containers reduce the number of VMs it needs to run.
Typical Docker use cases include the following (see the sketch after this list):
- Application development: Docker is primarily used to package an application's code and its dependencies. The same container can be shared from Dev to QA and later to IT, thus bringing portability to the development pipeline.
- Running microservices applications: Docker lets you run each microservice that makes up an application in its own container. In this way, it enables a distributed architecture.
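As a hedged sketch of the microservices case (the image names and the port are hypothetical), each service runs in its own container on a shared Docker network so the services can reach each other by name:

```bash
# Create a user-defined network the services will share
docker network create app-net

# Run each (placeholder) microservice in its own container
docker run -d --name orders-service   --network app-net myorg/orders:1.0
docker run -d --name payments-service --network app-net myorg/payments:1.0

# On the same user-defined network, containers resolve each other by name,
# e.g. orders-service can call http://payments-service:8080
```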
Docker Containers vs. VMs
At last, we arrive at the big question: how are the two different? It all comes down to what you want to do with them. Below, we'll mention a few advantages of Docker as opposed to a virtual machine (specifically Docker vs. VMware), and vice versa.
Pricing comparison
- Docker: Free. Enterprise Edition starts at $750/node/year.
- VMware vSphere: Standard license starts at $995.
Advantages of Virtual Machines
- The tools associated with a virtual machine are easier to access and simpler to work with. Docker has a more complicated tooling ecosystem that consists of both Docker-managed and third-party tools.
- As mentioned earlier, once you have a virtual machine up and running, you can start a Docker instance within that VM and run containers within it (which is the predominant method of running containers at present). This way, containers and virtual machines are not mutually exclusive and can co-exist alongside each other.
Advantages of Docker Containers
- Docker containers are process-isolated and don't require a hardware hypervisor. This means Docker containers are much smaller and require far fewer resources than a VM.
- Docker is fast. Very fast. While a VM can take at least a few minutes to boot and be dev-ready, it takes anywhere from a few milliseconds to (at most) a few seconds to start up a Docker container from a container image.
- Containers can be shared across multiple team members, bringing much-needed portability across the development pipeline. This reduces 'works on my machine' errors that plague DevOps teams.