The number of enterprises adopting Automation for their daily operations is growing rapidly. Automation can revolutionize IT service delivery and support, provided it is executed with the right planning and focus. Proper planning and setting the right level of expectations will result in at least one of the primary benefits – Cost Reduction, Revenue Generation, Risk Mitigation, or Quality Improvement. So, what is the checklist for starting your Automation journey?

1. What is your end goal?
2. Which of the four primary benefits (Cost Reduction, Revenue Generation, Risk Mitigation, and Quality Improvement) are you targeting?
3. Do you target one or more groups?
4. Do you have a COE group focusing on Automation for the entire company?
5. Are your tasks in IT jobs, Service Requests, DevOps, etc. already automated?
6. Do you have some form of automation already in place?
7. Have you already invested in tools and technologies to support Automation?

Once you have these clearly defined, you can plan your automation journey. As with any other initiative to transform what you have today, Automation should start with an analysis of the existing landscape and conditions.

Current workload Analysis
Your current workload could be measured in terms of the number of jobs, tickets, requests, calls, etc. Collect as much data as possible and group it into categories. Most of the data will be non-standard, but even Excel-based filtering and sorting will give a good idea of where your team is spending the most effort and money. Apply the 80:20 rule to pick your candidates for immediate automation, and design your Service Catalog around these.
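The 80:20 analysis above can be sketched in a few lines of Python; the ticket categories here are purely illustrative:

```python
from collections import Counter

def pareto_candidates(categories, threshold=0.8):
    """Return the smallest set of categories covering `threshold` of total volume."""
    counts = Counter(categories)
    total = sum(counts.values())
    picked, covered = [], 0
    for cat, n in counts.most_common():
        picked.append(cat)
        covered += n
        if covered / total >= threshold:
            break
    return picked

# Illustrative ticket export: two categories dominate the workload.
tickets = (["password reset"] * 50 + ["account creation"] * 30 +
           ["vpn issue"] * 10 + ["printer"] * 10)
print(pareto_candidates(tickets))  # ['password reset', 'account creation']
```

The categories returned are your first candidates for the Service Catalog.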
Right Process Identification
Most analysis will result in revelations about incorrect or inappropriate processes being followed for most of the existing workflows. Brainstorm with the concerned team and define/standardize the process flows – including stakeholders involved, approvals required, exceptions to be handled. The Service Catalog and the Process flows will result in a more self-service centric IT delivery system.
Plan Execution
Start by segregating the automation candidates into three tracks:
Start Small – A few cases may show immediate results once automated. Start with one small area like Account Creation or Password Resets.
Self-Serve Automation – Next, focus can shift to cases that need a move from generic incidents to self-service requests. Once that shift is done, these can be automated in the second phase of your automation journey.
AI Based – These are the cases that need patterns to be analyzed from the collected data and handled intelligently in automation.

Training & Communication – Automation doesn’t yield the intended results unless it is used, and used in the right way. All parties involved, from end users to IT, should be informed upfront about the plan, and adequate training should be part of the overall execution plan. The actual benefits of automation should be demonstrated to each group in terms of time or effort savings.

Feedback and Improvements – Automation is not a one-time project. It has to run in feedback cycles to find more exceptions and add them to the backlog. A systematic, regular audit of the automation results should be done to validate them against the expected outputs. Organizations adopt new technologies, tools, and applications periodically, and these can become part of the automation scope in the next phase. Relevance Lab’s AI-driven Automation Platform comes with a pre-built library of Automation BOTs for mundane IT tasks in areas like Identity & Access Management, Infrastructure Provisioning, DevOps, and Monitoring & Remediation. Please get in touch with marketing for more details on starting your Automation journey.
About Author
Ashna Abbas is Director of Product Management at Relevance Lab. She is a software professional with 12+ years of experience in product development and delivery.

You need experience to get experience. Most job seekers fresh out of university can relate to this when they’re looking to land a specific full-time role. One way to break the cycle is to work for a defined period in a role that serves as a springboard, helping them develop expertise and come to grips with how people conduct themselves in a corporate environment.

An internship, as it is called, is the perfect response to the “you need experience to get experience” conundrum that job seekers face. It also helps employers source a steady stream of talent that can spur innovation within the organization itself.

In other words, it benefits both parties: those seeking employment as well as the corporations or small businesses looking for skilled, sincere employees.
What You Need to Know About An Internship

So, what is an internship? Why does it matter in today’s job market?
An internship, simply put, is a period when you work with an employer, in a paid or unpaid role, to gain valuable work experience in the role of your choosing and smooth your transition into that role.

While it is common knowledge that getting a relevant and skill-based degree can increase your chances of being gainfully employed, completing an internship can substantially raise your chances of finding employment sooner than most.

In fact, statistics reveal that undergraduate students who complete at least one internship during their time at university tend to fare better when it comes to finding full-time work in the near future compared to those who do not.

Speaking of the future, an internship gives students a taste of what is to come before they land their first corporate job. Not only will you learn how to perform the tasks of your assigned role, but you will also get first-hand experience of working in such an environment.

Internships Offered at Relevance Lab

This is precisely what we offer at Relevance Lab too, where we source talent from a variety of universities around the country. Not only do interns benefit from obtaining relevant experience and expertise but our senior employees are able to hone their leadership skills through training programs.

Interns get an overview of the corporate work culture, business workflow and how their learning gained from time spent at university is implemented in the real world. Some of our interns have been recruited from reputed engineering colleges like AMC Engineering, MIT Manipal, Amity, NIT Surathkal and IIT Kharagpur.

Some of the projects that our interns have been involved with include:

  • Bill of Material explosion at Scale using Spark/Scala
  • Business Intelligence reporting by downloading Google Analytics data via the Google API at scale
  • Inventory Health Dashboard for supply chain analytics

That said, there are a slew of benefits on offer to both the employer and a potential employee if the latter proves his or her skills during the internship period. Even if some interns have to look for work elsewhere, there are still several benefits to both parties, which we will address next.

Our Internship Program – Benefits

So, what are some of these benefits?

Apart from organizations tapping a steady source of fresh talent and freshers gaining important work experience? Yes, there are several more, which is why interested individuals should seriously consider applying for an internship with us.

In particular, some of the benefits of applying for an internship with Relevance Lab include:

  • Edge in the job market
  • Gaining valuable work experience
  • Developing and refining skills
  • Networking with professionals in the field
  • Effortless transition into a full-time position

As for how our internship program benefits Relevance Lab itself, these include:

  • Locating a steady stream of new potential employees
  • Increased visibility on college campuses
  • Test-driving the talent
  • Obtaining a fresh perspective on old problems
  • Fostering leadership skills in current employees
  • Enhanced Social Media reach & Brand awareness

Of course, if you want to get started towards a successful career in the specialized IT services that we offer, you have to have experience to get experience, right?

About Author

Sampriti Banerjee is Marketing Executive at Relevance Lab.


There are occasions when one feels fulfilled and has a sense of accomplishment. Recently, I had such an experience and hence thought of penning down my thoughts here.

Access to clean water and sanitation is one of the biggest problems faced in India. One can either complain about it or take some decisive action. So, as a part of our Corporate Social Responsibility (CSR) initiative, my organization (Relevance Lab) decided to contribute towards hygiene and education as key themes.

We partnered with Child Help Foundation (CHF), an NGO that has a pan-India presence and works in the best interests of children in areas such as education, health, food, and shelter.

With our contribution, CHF took up a sanitation project and built two washrooms at the Government Lower Primary School in Guttahalli, in Karnataka’s Kolar district. We also contributed towards commissioning a rooftop water tank to ensure uninterrupted water supply and installed a UV-based water filter.

We enjoyed the drive to scenic Guttahalli, which is about 50 km from the hustle and bustle of Bangalore. The place is known for its ‘silk and milk’ heritage. We were overwhelmed by the hospitality of the teaching staff and the student community.

Officially, the project was inaugurated by the schoolchildren with the assurance that they would follow the recommended hygienic practices. They were so excited and enthusiastic that all of us felt very motivated. We distributed sweets and shared some toys with them.

The entire experience was both touching and motivational. The sparkle in the children’s eyes gave us a sense of accomplishment. It’s going to motivate me for the rest of my life – to do something good for a social cause!

That day, I also understood that as balanced, engaged, sustainable, and mature entities, organizations need to show their commitment towards important economic and social causes and contribute to the best of their ability. After all, businesses cannot be successful when the society around them fails!

About Author

Neeraj Deuskar is the Director and Global Head of Marketing at Relevance Lab.


In this era of digital transformation, organizations tend to be buried under a humongous amount of data or content. Websites form an integral part of any organization and encapsulate multiple formats of data, ranging from simple text to huge media asset files.

We see many business requirements to regroup/reorganize content, consolidate multiple sources of data, or convert legacy forms of data into new solutions. All these requirements involve content migration at its own scale, depending on the amount of data being migrated.

A common use case in any content management solution is moving heavy content between instances. AEM offers various methods, such as the vlt process, Recap, and Package Manager. Each option has its own pros and cons, but all share a common disadvantage: content migration takes a lot of time.

To overcome this, the latest versions of AEM have started supporting Grabbit as one of the quickest ways to transfer content between Sling environments. As per the AEM 6.4 documentation, there are two tools recommended for moving assets from one AEM instance to another.

Vault Remote Copy, or vlt rcp, allows you to use vlt across a network. You can specify a source and destination directory and vlt downloads all repository data from one instance and loads it into the other. Vlt rcp is documented at

Grabbit is an open source content synchronization tool developed by Time Warner Cable (TWC) for their AEM implementation. Because Grabbit uses continuous data streams, it has a lower latency compared to vlt rcp and claims a speed improvement of two to ten times faster than vlt rcp. Grabbit also supports synchronization of delta content only, which allows it to sync changes after an initial migration pass has been completed.

AEM 6.4 and Grabbit – possible?

We see a lot of questions in Adobe forums and TWC Grabbit forums asking if AEM 6.4 really supports Grabbit content transfer. The answer is yes!

Let’s look at the steps that need to be followed to use Grabbit in an AEM 6.4 instance and make it work across environments.

STEP 1: Install the following packages. Ensure the Grabbit package is installed at the end.

1. Sun-Misc-Fragment-Bundle-1.0.0

2. Grabbit-Apache-Sling-Login-Whitelist-1.0

3. Grabbit-Deserialization-Firewall-Configuration-1.0

4. Grabbit-7.1.5

STEP 2: Add the twcable Grabbit package to the Sling login admin whitelist – com.twcable.grabbit

STEP 3: Adjust the Deserialization firewall configuration in the OSGi console.

Ensure the following items are removed from the blacklist:


Ensure the following is added to the whitelist:



To ensure Grabbit is successfully installed, try hitting the Grabbit URL to fetch the list of transactions or jobs (http://<host>:<port>/grabbit/transaction/all). If this returns an empty list [], Grabbit is successfully installed and ready to be used to send or receive data.
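This readiness check can be scripted; the host, port, and admin credentials below are illustrative defaults that you should adjust for your instance:

```python
import json
import urllib.request

# Hypothetical endpoint; adjust host/port for your AEM instance.
GRABBIT_URL = "http://localhost:4502/grabbit/transaction/all"

def grabbit_ready(body: str) -> bool:
    """Interpret the /grabbit/transaction/all response body.
    An empty JSON list ([]) means Grabbit is installed and has no jobs yet."""
    try:
        return isinstance(json.loads(body), list)
    except ValueError:
        return False

def check_grabbit(url=GRABBIT_URL, user="admin", password="admin"):
    """Fetch the transaction list with basic auth and report readiness."""
    pm = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    pm.add_password(None, url, user, password)
    opener = urllib.request.build_opener(urllib.request.HTTPBasicAuthHandler(pm))
    with opener.open(url, timeout=10) as resp:
        return grabbit_ready(resp.read().decode("utf-8"))
```

If `check_grabbit()` returns True, Grabbit is ready to send or receive data.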

When running on Windows, if anything goes wrong while initiating the Grabbit job, it is difficult to get the error details, as the command window closes immediately and you cannot see the error code returned from the job. To see the error code/message in the command prompt window, comment out the Clear command inside the Else block of the newGrabbitRequest() function.

This will help you review the errors and resolve them effectively.

We have been successful in migrating content between AEM 6.2 and 6.4 instances and between 6.4 AEM instances using Grabbit.

Try these steps to install and use Grabbit without any hiccups, and make your content migration quick and smooth.

Enjoy your content migration!

About Author

Saraswathy Kalyani is an experienced AEM and Portal consultant at Relevance Lab.



ChefConf is a global gathering that allows the DevOps community to learn, share ideas, and network. By sharing real-world examples of how organizations solve problems to deliver business value, ChefConf is all about tactics, strategies, and insights for transformational application delivery organizations.

Relevance Lab was a Silver Sponsor at ChefConf 2019 in Seattle. As a strategic Chef Partner, Relevance Lab provides end-to-end Chef Factory solutions to deliver real business value to enterprises, helping them build automated, well-governed, secure, and industry-compliant cloud environments.

At the event, Pradeep Joshi, Senior Director of DevOps at Relevance Lab, was interviewed by Chris Riley of Digital Anarchist, an all-new video platform from the MediaOps family of brands. Pradeep spoke about Relevance Lab’s presence in the DevOps domain for the past eight years and the various services that the company offers.

As a Chef Partner for six years, and focused on DevOps automation, Relevance Lab offers services such as infrastructure automation, configuration management, and continuous deployment, among others. Pradeep explained how DevOps has transformed businesses over the years. Projects used to start with hardware procurement, move on to application capacity planning, and go around in cycles. Things move a lot faster now as hardware is much more affordable and the cloud offers servers in a matter of minutes. At the same time, the mindset of people and the culture of organizations have also changed. From senior management to lower-level employees, people have been more accepting of these changes.

Pradeep reaffirmed that automation is key to the success of organizations across the world. According to him, “what to automate” is a tougher decision to make than “how to automate”. For instance, when there are different teams (IT, software, applications, database, production support, etc.) working together, Excel sheets, emails, and chats among the groups could delay processes to a large extent. When there’s a need for faster deployment or when there are configuration changes for the production team, people are skeptical about how such tasks can be automated. Primitive and inefficient ways slow down processes, and this is an area where products like Chef help automate processes through code. Relevance Lab advises its clients that infrastructure, security, applications, and compliance should be code. All of this can be achieved with automation.

On being asked how the automation idea began for Relevance Lab’s clients, Pradeep said it all started with a problem statement. There is always a need or a problem to be solved, and Relevance Lab is keen on understanding the exact problem. Is shipping the applications a major issue? Is managing configurations more taxing? Is there a cultural block in the organization that makes employees resistant to change? It is natural for employees to feel some anxiety while moving to the cloud; they often feel it is not safe to put production data on the cloud. According to Pradeep, this mindset should change and evolve with frequent meetings, discussions, and constant mentoring. Employees need to understand the benefits of moving to the cloud; they should be more agile, be favorable to market changes, and eventually get used to the new ways of doing things.

With affordable infrastructure available at the click of a button, people should start thinking from an application point of view, such as: what does my application need for deployment? After procuring hardware and scripting automation, Pradeep says the next big change is going to be all about doing things more intelligently. In this regard, Relevance Lab has come up with a new framework called BOTs that enables automation of mundane tasks such as password reset, user creation/deletion, and data backup.

Pradeep concluded the discussion by emphasizing the growing need to separate the tasks that need to be done by humans from those that can be automated. After all, automation allows an organization to get a lot more done in a day, ultimately boosting efficiency and enhancing productivity.

(This blog is based on a video interview conducted during ChefConf 2019 by MediaOps; the original video can be found here)

Video Courtesy: Digital Anarchist



As part of its application development services, Relevance Lab has partnered with ServiceNow to implement its new Enterprise DevOps offering. This partnership enables “intelligent software change management” based on data inputs from various underlying DevOps tools.

Enterprise DevOps is a collaborative work approach that brings a tactical and strategic balance to businesses. Relevance Lab’s expertise in implementing automated pipelines around Infrastructure Automation, Configuration Management and Continuous Deployments will help in implementing end-to-end solutions to customers as they embrace ServiceNow Enterprise DevOps.

The initial release of ServiceNow Enterprise DevOps has elements that provide for some specific use cases. 

Integration: Out-of-the-box integrations with standard tools in the DevOps toolchain are among the primary use cases. Planned examples include GitHub, GitLab, Jenkins, and Jira, as well as accessing data from ServiceNow Agile Development (Agile 2.0) and other ServiceNow products.

Automation: The first use case in automation will be to leverage data from integrations to connect to ServiceNow ITSM Change Management. This will simplify the use of Change Management features and APIs to assess changes from the DevOps pipeline and to automate them where appropriate. Change approval policies will be a core component of this automation. Refer to this blog post for more information—the DevOps product will add an out-of-the-box capability to the whole process.

Shared Insights: With end-to-end visibility of the DevOps toolchain, there will be unique insights into both development and operations. This includes production data for developers and change information for operations, such as the ability to trace a change back to the original code commit and report on test runs.



When a digital product is developed in and for a particular geography and market, the enterprise and its developers/architects focus on getting it to market first. Once it matures and gains greater engagement, the enterprise looks to continuously develop a stable product that is feature-rich, scalable, and robust. So, when the opportunity to take the product global arises, its design and development encounter a whole set of challenges that were not accounted for in the initial stages. This is when the localization-versus-internationalization challenges take root.

We have compiled a few quick hacks that you can use as your checklist for a smoother transition.

Let’s first define internationalization and localization. As per the World Wide Web Consortium (W3C), localization is defined as “the adaptation of a product, application or document content to meet the language, cultural and other requirements of a specific target market (a “locale”).” Internationalization is defined as “the design and development of a product, application or document content that enables easy localization for target audiences that vary in culture, region, or language.”

So, what are the typical factors that teams of designers, architects and developers need to first establish, foreseeing internationalization?

Legal and Regulatory Guidelines

Different regions, geographies, countries, and markets have different regulatory guidelines that can extend to several facets, including organizational practices, trademark requirements, currency repatriation, tax compliance, regulatory compliance, duties, and corporate agreements and contracts. If you’re working on a product that needs to go to market in China, for instance, it is important to understand the legal and regulatory framework within which you can operate.

Hosting Location, Global Distribution/Content Delivery Network Infrastructure

Different geographies, different networks, and different internet speeds are factors you must consider when taking your product global. How can a product be designed to ensure that it “loads” quickly for customers across regions? Should there be a central database, or should it be distributed? Who will manage the distribution of data, and where and how? Answering these questions is critical to avoiding customer drop-offs.

Unicode Support

Today, Unicode supports most of the world’s writing systems. Enabling its proper usage to support local, regional, or even cultural context is critical. For instance, supporting special characters from different languages that fall outside the ASCII range (which defines 128 characters, roughly a typical English keyboard) is extremely important. Unicode defines 2²¹ code points (covering characters from all recognized languages in the world); thus, the UTF-8/16 encoding format should be supported when you plan to release your product in different geos. This is a crucial requirement for the database, DB connection, build setup, and server startup options, to name a few.
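A quick illustration of why UTF-8 matters: text outside the 128-character ASCII range round-trips safely through UTF-8 but fails plain ASCII encoding:

```python
# Characters outside the 128-character ASCII range survive a UTF-8 round trip.
samples = ["café", "日本語", "मराठी", "Ω"]
for text in samples:
    encoded = text.encode("utf-8")          # bytes safe to store/transmit
    assert encoded.decode("utf-8") == text  # lossless round trip
    try:
        text.encode("ascii")
        print(f"{text!r} fits in ASCII")
    except UnicodeEncodeError:
        print(f"{text!r} needs UTF-8 ({len(encoded)} bytes)")
```

The same principle applies end to end: your database columns, connection settings, and server startup options must all agree on the encoding.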

If you’re fond of hard-coding text, think again. Creating text and media that are easily editable gives you the flexibility to localize and adapt your product for different regions.

Multi-Lingual Support

English may be one of the most spoken languages in the world, but that doesn’t necessarily mean every region or market thinks, reads and speaks English. For instance, if you’re releasing a product in Japan, China or even India (with its plethora of scripts for different languages), multi-lingual support for your product is essential. Here are a few things to follow:

  • Maintain language-specific resource files for each supported language (e.g., with key-value pairs: the key is used in the code to place the text, and the value is the language-specific translated text).
  • All text on the screen, messages and label text must be sourced from resource files, and there should be no referencing of text directly in the code.
  • Decide whether to keep the resource files on the back end or the front end. Front end is recommended, as it improves performance by avoiding calls to the back end when toggling languages.
  • UI framework should support translation features.
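The resource-file lookup described above can be sketched as follows; the dictionary contents and the English-fallback rule are illustrative, and in practice each locale would live in its own JSON or properties file:

```python
# Hypothetical per-locale resource bundles, one per supported language.
RESOURCES = {
    "en": {"greeting": "Welcome", "logout": "Log out"},
    "es": {"greeting": "Bienvenido", "logout": "Cerrar sesión"},
}

def t(key, locale, fallback="en"):
    """Look up `key` in the locale's bundle, falling back to English,
    and finally to the key itself so missing strings are visible."""
    bundle = RESOURCES.get(locale, {})
    return bundle.get(key, RESOURCES[fallback].get(key, key))

print(t("greeting", "es"))  # Bienvenido
print(t("logout", "fr"))    # Log out  (no French bundle, falls back to English)
```

Keeping every on-screen string behind a lookup like `t()` is what makes the “no text directly in the code” rule enforceable.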

Think about text expansion in different languages, in terms of the number of characters, and how it can affect the UI and UX of your product. The same holds true for languages that are written from right to left, as well as for translations.

Scalable Framework for Geo-Specific Customization

Development framework must consider the UI/UX, branding, orientation and size of the product when going i18n. For instance, the color red could mean different things in different countries and cultures—in China it could mean endurance, in India it could mean purity, in Europe it could mean self-sacrifice or in South Africa it could mean grief and sorrow. Therefore, it is extremely crucial to understand different nuances of cultural significance while designing the UI for a great UX. A few other factors to keep in mind:

  • Tag-based framework for content: content is tagged by language/country and served to users whose profiles match the tag values. This way, you may have content in various languages, but a user in Spain sees only the Spanish content simply by setting their language preference in your system.
  • Orientation and sizing adjustments—specific CSS to handle alignment/size specific customization.
  • Navigation direction (left-to-right or right-to-left), enabling/disabling certain options through CSS customization.
  • Localization-based content.
  • Language-based content.
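The tag-based selection in the first bullet above can be sketched as follows; the field names and content items are purely illustrative:

```python
# Minimal sketch of tag-based content selection.
CONTENT = [
    {"title": "Spring offers", "tags": {"lang": "en", "country": "US"}},
    {"title": "Ofertas de primavera", "tags": {"lang": "es", "country": "ES"}},
]

def content_for(profile, items=CONTENT):
    """Return titles of items whose tags all match the user's profile."""
    return [i["title"] for i in items
            if all(profile.get(k) == v for k, v in i["tags"].items())]

print(content_for({"lang": "es", "country": "ES"}))  # ['Ofertas de primavera']
```

A user simply sets a language/country preference; the framework filters content by matching tags rather than branching in code.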

Developing a framework for rapid transition to international markets requires a thorough think-through from a product enablement perspective, keeping in mind operational efficiency without impacting product behavior. Using this checklist will help you save time, money and rework when you finally decide to go i18n.

About the Author: 

Ruchi is a Director, Solution Architect at Relevance Lab. She has around 20 years of experience leading project execution for various customers in technical leadership roles. She has been involved in designing, implementing, and managing enterprise-grade, highly scalable i18n solutions catering to multiple geographies with diverse needs.

(This blog was originally published in and can be read here )




At last year’s Google I/O conference, when Sundar Pichai demoed an AI assistant that can schedule appointments, make calls on our behalf, book tables at a restaurant, and more, many of our imaginings about AI became reality. It felt as though Pichai was talking to Aladdin’s Genie, who fulfils the day-to-day mundane operational tasks that can be made simpler with the help of Artificial Intelligence. Similarly, the mission of AIOps is to make the job of IT Operations simpler and more efficient.

According to Gartner, AIOps, the “Artificial Intelligence” for IT Operations, is already making waves in the way IT Ops teams work. One of the important digital-technology use cases is how AIOps is becoming pervasive in the IT world and transforming traditional IT management techniques. While digital transformation accelerates Cloud adoption for enterprises, there is an increasing need to manage the “Day 2 Cloud scenario” more efficiently in order to realize the true benefits of cloud transformation. AIOps helps IT Operations teams automate and enhance their operations using analytics and machine learning. This enables them to analyze data collected from various tools and devices and to predict and resolve IT issues in real time, ensuring that business services are always available to business users. This is important for any organization that operates in a “service-driven” environment.

Here are the key components of AIOps:

1) Data Ingestion: This is a core capability of any AIOps tool. These tools process data from disparate, heterogeneous sources. AIOps is based on machine learning and data crunching, so it is important to ingest the various datasets that determine the key success parameters for IT operations: data collected from performance monitoring tools, service desk ticket data, individual system metrics, network data, etc. Because this data is voluminous and growing exponentially, it is very difficult to track all these datasets manually and determine their impact on day-to-day IT Operations.

2) Forming the Model/Anomaly Detection: Once the data ingestion layer is in place, the next important aspect of an AIOps system is the ability to form a model of what is normal. Once the system forms this model, anomaly detection can be built on top of it: any parameter that deviates from normal can be flagged as an anomaly that could lead to outages, hampering the availability of business services. Machine learning can be applied to anomaly detection, though it is best applied to specific use cases where patterns and actions are repeatable. This is the step where self-learning capabilities are injected into the system.

3) Prediction of Outages: Once the system starts determining what is normal and what is an anomaly, it becomes easier to predict outages, performance degradation, or any other condition that affects the overall business, based on the model. For instance, an increase in database queue sizes could lead to longer transaction times for online payments, which could in turn lead to abandoned shopping carts. The AIOps tool should be able to predict such a pattern and flag it.

4) Actionable Insights: The system should be able to look at past data on actions taken and recommend possible actions that can prevent downtime of business services. Past actions can be tracked via the ITSM tickets created for past incidents or through the knowledge-base articles associated with those tickets.
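The model-forming/anomaly-detection step above can be sketched with a simple rolling-statistics baseline; real AIOps tools use far richer models, and the metric values here are illustrative:

```python
import statistics

def anomalies(series, window=20, threshold=3.0):
    """Flag indices whose value deviates more than `threshold` standard
    deviations from the rolling mean of the previous `window` observations."""
    flagged = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        mean = statistics.fmean(history)
        stdev = statistics.pstdev(history) or 1e-9  # avoid division by zero
        if abs(series[i] - mean) / stdev > threshold:
            flagged.append(i)
    return flagged

# 40 steady readings around 50, then a spike at index 40.
metrics = [50.0, 51.0, 49.0, 50.5] * 10 + [120.0]
print(anomalies(metrics))  # [40]
```

The baseline (“model of normal”) adapts as the window slides, which is the simplest form of the self-learning described above.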

One of the important AIOps use cases we have been implementing for an enterprise client is storage management. In a typical production environment, the IT Operations team gets an alert when a disk is close to full capacity, and responses from the affected node degrade. Through intelligent monitoring and correlation analysis, the exact cause can be determined, the storage capacity can be adjusted automatically by proactively adding new volumes, and the node can be restored to normal operation.
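That remediation flow could be sketched as follows; `get_disk_usage_pct`, `add_volume`, and `notify` are hypothetical stand-ins for your monitoring and cloud-provider APIs, and the threshold is illustrative:

```python
THRESHOLD_PCT = 85  # illustrative "close to full" cutoff

def remediate_storage(nodes, get_disk_usage_pct, add_volume, notify):
    """Proactively add a volume to any node nearing full capacity,
    returning the list of nodes that were remediated."""
    actions = []
    for node in nodes:
        usage = get_disk_usage_pct(node)
        if usage >= THRESHOLD_PCT:
            add_volume(node)                              # provider API call
            notify(f"{node}: {usage}% full, volume added")  # alert the team
            actions.append(node)
    return actions

# Example with stubbed callbacks:
usage = {"db-01": 92, "db-02": 40}
log = []
print(remediate_storage(usage, usage.get, lambda n: None, log.append))  # ['db-01']
```

The correlation analysis decides *whether* to remediate; a loop like this is the unglamorous part that actually executes the fix.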

There are other use cases of AIOps in capacity management, resource utilization, etc., which could make the lives of IT Ops teams much simpler. The day is not far when a CIO takes the avatar of Aladdin and the Genie shows up in the form of an AIOps tool.

About the Author:

Sundeep Mallya is the Vice President and Head of Engineering for RL Catalyst Product at Relevance Lab.



We are delighted to announce that Relevance Lab has partnered with Google Cloud Platform (GCP) as a “Technology Partner” and has been listed in the GCP partner directory. Our product RL Catalyst has also been integrated with and certified by GCP.

With GCP gaining market traction and now among the top three public cloud providers, aligning with GCP is a natural progression of our product strategy. With the integration of RL Catalyst with GCP, we are uniquely positioned to offer our customers an end-to-end integrated DevOps-to-ITOps management story, with multi- and hybrid-cloud management capabilities and our investments in Automation to optimize costs and increase productivity.

With Google’s focus on Data and Business Process solutions, and based on their inputs, we also plan to leverage Google Dialogflow for building enterprise automation solutions around workflows, chatbots, etc. This is in line with the frictionless-business initiatives of most enterprises, which want to become more agile by simplifying processes, reducing manual effort and cost in IT services, optimizing IT infrastructure usage and costs, and increasing business service availability by leveraging new-generation technologies like DevOps, Cloud and Automation, while integrating or sunsetting their legacy monolithic applications and IT infrastructure.



Globally, organizations have embraced cloud computing and delivery models for their numerous advantages. Gartner predicts the public cloud market will grow 21.4 percent by the end of 2018, from $153.5 billion in 2017. Cloud computing services give organizations the opportunity to consume specific services with the delivery models most appropriate for them. They help increase business velocity and reduce capital expenditure by converting it into operating expenditure.

Capex to Opex Structure

Capital expenditure refers to the money spent purchasing hardware and building and managing in-house IT infrastructure. With cloud computing, the entire storage and network infrastructure can be accessed from a data center without any in-house infrastructure requirements. Cloud service providers also offer the required hardware infrastructure and provision resources as per business requirements, so resources can be consumed according to the need of the hour. Cloud computing also offers flexibility and scalability as business demands change.

All these factors help organizations move from a fixed cost structure for capital expenditure to a variable cost structure for the operating expenditure.

Cost of Assets and IT Service Management

After moving to a variable cost structure, organizations must look at the components of that structure: the cost of assets and the cost of service, or cost of IT service management. The cost of assets shows a considerable reduction after moving the entire infrastructure to the cloud. The cost of service remains vital, as it depends on day-to-day IT operations and represents the day-after-cloud scenario. The leverage of cloud computing can only be realized if the cost of IT service management is brought down.

Growing ITSM or ITOps Market and High Stakes

While IT service management (ITSM) has taken on a new avatar as IT operations management (ITOM), incident management remains the critical IT support process in every organization. The incident response market is expanding rapidly as more enterprises move to the cloud every year. According to MarketsandMarkets, the incident response market is expected to grow from $13.38 billion in 2018 to $33.76 billion by 2023. The key factors driving the incident response market are heavy financial losses following incidents, the rise in security breaches targeting enterprises, and compliance requirements such as the EU’s General Data Protection Regulation (GDPR).

Service fallout or service degradation can impact key business operations. A survey conducted by ITIC indicates that 33 percent of enterprises reported that one hour of downtime could cost them $1 million to more than $5 million.

Cost per IT Ticket: The Least Common Denominator of Cost of ITSM

As organizations have high stakes in ensuring that business services run smoothly, IT ops teams have the added responsibility of responding to incidents faster without compromising the quality of service. The two important metrics for any incident management process are 1) cost per IT ticket and 2) mean time to resolution (MTTR). While cost per ticket impacts the overall operating expenditure, MTTR impacts customer satisfaction: the higher the MTTR, the longer tickets take to resolve and, hence, the lower the customer satisfaction.

Cost per ticket is the total monthly operating expenditure of the IT ops team (IT service desk) divided by its monthly ticket volume. According to an HDI study, the average cost per service desk ticket in North America is $15.56. Cost per ticket increases as a ticket gets escalated and moves up the life cycle; for an L3 ticket, the average cost per ticket in North America is about $80-$100+.
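The metric itself is simple arithmetic, shown below with illustrative figures (the opex and ticket volume are hypothetical, chosen so the result matches the $15.56 North American average quoted above, not taken from the HDI study).

```python
# Worked example of the cost-per-ticket metric defined above:
# monthly opex of the service desk divided by monthly ticket volume.
# The input figures are illustrative.
def cost_per_ticket(monthly_opex, monthly_tickets):
    return monthly_opex / monthly_tickets

print(cost_per_ticket(77_800, 5_000))  # 15.56
```

Tracking this ratio month over month makes the effect of automation visible: ticket volume handled by bots raises the denominator without raising the opex in the numerator.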

Severity vs. Volume of IT Tickets

From our experience in managing cloud IT ops for our clients, we understand that organizations normally look at the volume and severity of IT tickets and target the High Severity and High Volume quadrants to reduce ticket costs. However, we strongly feel that organizations should start their journey with the low-hanging fruit: Low Severity tickets that are repeatable in nature and can be automated using bots.
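The quadrant triage described above can be sketched as a simple classification. Everything in this snippet is hypothetical: the ticket categories, the severity scale (P1/P2 treated as high severity), and the volume cutoff are invented for illustration.

```python
# Hypothetical sketch of severity-vs-volume triage: bucket ticket categories
# into quadrants, then pick low-severity, high-volume (repeatable) ones as
# the first candidates for bot-based automation.
def quadrant(severity, monthly_volume, volume_cutoff=100):
    sev = "High-Sev" if severity <= 2 else "Low-Sev"  # P1/P2 = high severity
    vol = "High-Vol" if monthly_volume >= volume_cutoff else "Low-Vol"
    return f"{sev}/{vol}"

tickets = {
    "password reset": (4, 900),    # (severity, monthly volume) — illustrative
    "database outage": (1, 3),
    "disk space alert": (3, 250),
}
candidates = [name for name, (sev, vol) in tickets.items()
              if quadrant(sev, vol) == "Low-Sev/High-Vol"]
print(candidates)  # ['password reset', 'disk space alert']
```

Starting with these repeatable, low-risk categories lets the automation prove itself before it is trusted with high-severity incidents.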

In the next blog, we will elaborate on this approach that can help organizations in measuring and reducing the cost of IT tickets.


About the Author:

Neeraj Deuskar is the Director and Global Head of Marketing for Relevance Lab, a DevOps and Automation specialist company making cloud adoption easy for global enterprises. In his current role, Neeraj formulates and implements the global marketing strategy, with key responsibilities for brand and pipeline impact. Prior to this role, Neeraj managed global marketing teams for various IT product and services organizations, handling responsibilities including strategy formulation, product and solutions marketing, demand generation, digital marketing, influencer marketing, thought leadership and branding. Neeraj holds a B.E. in Production Engineering and an MBA in Marketing, both from the University of Mumbai, India.

(This blog was originally published in and can be read here: )