The topic of lock-in is very popular. Lock-out should be too!
The present blinds us. We don’t look far enough into the future. We don’t ask ourselves the following question: “What if another opportunity or reality arises in 90 days?” That is the subject of lock-out.
“We are responsible for our data! What happens if we need a certain functionality in three months and we are ‘locked out’ of it? We fear lock-in by a supplier, but forget about lock-out, for which we ourselves are fully responsible,” says Michael Cade, Global Field CTO at Veeam Software. “In concrete terms: we fail to keep our options open!”
Accounts locked out of hardened repositories, storage problems such as failures in Object Lock compliance mode that require specific recovery procedures… that’s lock-out. And much more.
Lockout policy
“Data lockout can refer to a sudden blockage of access to data or to the configuration of a system, often caused by misconfigured account lockout policies or system issues, but also to the data immutability feature, which prevents data from being modified or deleted for a certain period of time,” explains Michael Cade. “Account lockout policies block unauthorized access attempts after a certain number of failed attempts, while the immutability feature—for example, with S3 Object Lock—protects data from ransomware attacks.”
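The first mechanism Cade mentions, an account lockout policy, can be illustrated with a toy sketch. This is not any vendor's actual implementation; the threshold, names, and logic below are illustrative assumptions:

```python
from dataclasses import dataclass

# Illustrative threshold: lock the account after this many consecutive failures.
MAX_FAILURES = 3

@dataclass
class Account:
    name: str
    failures: int = 0
    locked: bool = False

def attempt_login(account: Account, password_ok: bool) -> bool:
    """Return True on success; enforce the lockout policy on failures."""
    if account.locked:
        return False  # locked out: even a correct password is now rejected
    if password_ok:
        account.failures = 0  # success resets the failure counter
        return True
    account.failures += 1
    if account.failures >= MAX_FAILURES:
        account.locked = True  # the lockout kicks in
    return False

acct = Account("alice")
for _ in range(3):
    attempt_login(acct, password_ok=False)
print(acct.locked)                             # True
print(attempt_login(acct, password_ok=True))   # False: locked out despite a valid password
```

The last line is the "lock-out" point of the article: once the policy triggers, legitimate access is blocked too, which is why misconfigured thresholds can suddenly cut you off from your own data.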
Paradoxically, we think more about lock-in than lock-out. At Veeam, which has focused entirely on the flexibility of its portable data format, lock-in is not an issue. “If, for whatever reason, you decide to stop being a Veeam customer, you benefit from a major advantage: a way out! A customer does not have to keep paying maintenance fees just to restore workloads or retain backups for longer.”
Understanding our data
This leads to greater flexibility. And better cost management. For Michael Cade, it’s about looking further ahead. “We’ve broken down the walls of the data center,” he illustrates. “We no longer have to be in one place; our data is spread across multiple platforms. That means we need to ‘understand’ our data and know which part of it is most critical. Not all of it is equally important, even if we still need it—and that matters most when a problem arises. It is essential to know where our most crucial data is located so we can find it immediately.”
The answer starts with creating the data itself. In other words, managing the data lifecycle. Understanding your data is essential in many ways: how to protect it, how to move it, where it is available, and much more.
Identify redundant data
“Ultimately, everything is connected. And it all starts with the application itself. You always have to start with the application, whether it runs on Kubernetes, on a virtual machine, or on a physical server, and whether you developed it yourself or bought it. Then you have to run it somewhere: that’s infrastructure lifecycle management. That infrastructure can be built manually, or with IaC (Infrastructure-as-Code) and other automation tools. Then it’s about managing the continuous lifecycle.”
According to Michael Cade, it is a shame when we cut ourselves off from new functionality. This also applies to databases. “If you choose Amazon RDS, you depend on Amazon’s updates to that database. That could be Postgres 16 or 17. If the update doesn’t come quickly enough, you miss out on the new functionality of the latest Postgres release by staying with Amazon RDS instead of running your own version. Of course, you then have to manage that version yourself, which the RDS service would otherwise do for you.”
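The version-lag trade-off Cade describes can be sketched in a few lines. The version numbers below are illustrative assumptions, not the actual state of any RDS offering; the point is only the comparison logic:

```python
# Toy sketch of the managed-service version lag: a provider-pinned database
# can trail the upstream major release, locking you out of new features.
# Versions here are (major, minor) tuples and purely illustrative.

def lags_behind(managed: tuple[int, int], upstream: tuple[int, int]) -> bool:
    """True when the managed offering trails the upstream major version."""
    return managed[0] < upstream[0]

managed_version = (16, 4)   # assumed version offered by the managed service
upstream_version = (17, 0)  # assumed latest self-managed release

print(lags_behind(managed_version, upstream_version))  # True: new features unavailable
```

A True here is exactly the "lock-out" the article warns about: the data is safe, but a capability of the newer release is out of reach until either the provider catches up or you take on managing the database yourself.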
No longer needed? Then get rid of it!
The most important thing is that we can work together, evolve, and investigate redundancy. In other words: identify redundant, outdated, or trivial data, and answer the question “Do we still need this?”
“This leads to a healthier approach to data: if we no longer need it, we’re better off getting rid of it, because storing it is likely to be expensive. Then make sure that the data you keep is of the highest quality, that it’s stored on the highest-performance storage, that it’s protected, and that it offers optimal resilience. Then you’re good to go!”
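One simple way to start answering “Do we still need this?” is to flag data that hasn’t changed in a long time as candidates for review. This is a minimal sketch, assuming a 90-day window and using file modification times as a rough proxy for relevance—both illustrative choices, not a recommendation:

```python
import os
import time
import tempfile

# Illustrative review window: files untouched for longer than this are flagged.
RETENTION_DAYS = 90

def stale_files(root, now=None):
    """Return paths under root whose modification time predates the window."""
    now = time.time() if now is None else now
    cutoff = now - RETENTION_DAYS * 86400
    stale = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) < cutoff:
                stale.append(path)
    return stale

# Demo on a throwaway directory: one backdated file, one fresh file.
with tempfile.TemporaryDirectory() as tmp:
    old = os.path.join(tmp, "old.log")
    new = os.path.join(tmp, "new.log")
    open(old, "w").close()
    open(new, "w").close()
    past = time.time() - 120 * 86400
    os.utime(old, (past, past))  # backdate mtime by ~120 days
    print([os.path.basename(p) for p in stale_files(tmp)])  # ['old.log']
```

Anything flagged should be reviewed, not blindly deleted—the sketch identifies candidates, and the decision to archive or discard remains a human one, in line with the article’s point about owning responsibility for your data.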