The data storage industry is undergoing a major transformation driven by many factors, including the need for security, speed, efficiency, and cost reduction. Research firm Gartner recently predicted a 23-fold growth in petabytes shipped through 2030, a trajectory that promises to reshape data centers and IT operations alike. To stay on top of the storage game, keep a close eye on these eight trends.
DNA Storage
DNA, when used as a data storage medium, promises much larger capacity and more flexible storage environments than traditional storage architectures. DNA storage enables data storage at the molecular level, storing information directly into DNA molecules.
“The advantage of DNA-based data storage is its density and stability,” said Nick Heudecker, a former Gartner analyst. “One gram of DNA can store about 215 petabytes of data with a minimum lifespan of 500 years.” The medium cannot be exposed to direct sunlight, however, as UV breaks down DNA.
However, it is important to note that this is a long-term trend. Although DNA storage is evolving rapidly, DNA-based media are not expected to become mainstream for some time. There is currently no firm timeline for commercial DNA storage capabilities, although some are optimistic it could be commercialized by the end of the decade.
“Current DNA sequencing and synthesis technologies are too expensive and slow to compete with traditional [storage] infrastructures,” said Heudecker. Access latency remains high, measured today in minutes or hours, with a maximum write speed of kilobits per second. “A DNA drive that competes with tape storage must support gigabits per second write speeds,” he notes. Achieving such speeds would require DNA synthesis, the writing process, to become roughly six orders of magnitude faster, while DNA sequencing, the reading process, would need to speed up by two to three orders of magnitude.
Even if the access latency and throughput issues can be successfully resolved, there is still a significant cost hurdle to overcome. “Magnetic tape storage media costs about $16 to $20 per terabyte,” says Heudecker. The cost of DNA synthesis and sequencing hovers around $800 million per terabyte.
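A quick back-of-the-envelope calculation makes the gap concrete. The sketch below simply restates the figures quoted above, with the midpoint of the tape price range chosen for illustration.

```python
# Rough comparison of DNA storage vs. magnetic tape, using the figures quoted above.

dna_write_speed_bps = 1e3        # kilobits per second (today's DNA synthesis)
tape_target_speed_bps = 1e9      # gigabits per second (competitive target)

dna_cost_per_tb = 800_000_000    # ~$800 million per TB for synthesis and sequencing
tape_cost_per_tb = 18            # midpoint of the $16-$20 per TB range

speed_gap = tape_target_speed_bps / dna_write_speed_bps
cost_gap = dna_cost_per_tb / tape_cost_per_tb

print(f"Write speed must improve by a factor of {speed_gap:,.0f} (~6 orders of magnitude)")
print(f"DNA storage is currently ~{cost_gap:,.0f}x more expensive per terabyte than tape")
```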
Storage security
All businesses pay attention to cybersecurity, but many neglect the complete security of their data, both at rest and in transit. “Today, many organizations share data repositories between their on-premises data centers and public or private cloud environments,” said Cindy LaChapelle, senior advisor at technology research and consulting firm ISG. “In the age of ransomware, it’s important to invest in creating air-gapped backups of data so that data copies become inaccessible in the event of a major breach.” Air gapping means using a standalone system that is not connected to any kind of network.
Scott Reder, senior storage specialist at digital transformation consulting firm AHEAD, said he sees growing interest in adding and improving cyber resilience. WORM (write once, read many) technology, developed years ago for financial institutions that must comply with U.S. Securities and Exchange Commission regulations, is now being applied by companies in healthcare and many other fields to prevent data tampering. As a result, tools like NetApp SnapLock and Dell OneFS SmartLock have found new life amid increasing cyber threats, Reder said.
To protect primary file/NAS storage, real-time scanning is provided by products such as Superna Ransomware Defender for Dell OneFS and NetApp Cloud Insights with Cloud Secure for ONTAP, Reder said. For users of block storage, multifactor authentication and/or protected snapshots are available to safeguard critical data.
As storage security tools mature, businesses are working harder to deploy storage products with built-in security features that complement broader enterprise security initiatives, such as adopting zero-trust network access (ZTNA), to protect corporate data.
SSD data reduction
Data reduction is the process of shrinking the amount of space required to store data. The technique can increase storage efficiency and reduce costs. Data reduction methods such as compression and deduplication have been applied to many types of storage systems but are not yet widely available for SSDs.
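As a minimal illustration of the two techniques named above, the sketch below deduplicates fixed-size blocks by content hash and then compresses the unique blocks losslessly. It is a conceptual model, not how any particular SSD or array implements inline reduction.

```python
import hashlib
import zlib

BLOCK_SIZE = 4096  # fixed-size blocks, purely for illustration

def reduce_data(data: bytes):
    """Deduplicate fixed-size blocks, then losslessly compress the unique ones."""
    unique_blocks = {}   # content hash -> compressed block
    block_map = []       # ordered list of hashes, enough to reconstruct the data

    for offset in range(0, len(data), BLOCK_SIZE):
        block = data[offset:offset + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in unique_blocks:
            unique_blocks[digest] = zlib.compress(block)  # lossless compression
        block_map.append(digest)

    stored = sum(len(b) for b in unique_blocks.values())
    print(f"logical: {len(data)} bytes, stored: {stored} bytes "
          f"({len(block_map)} blocks, {len(unique_blocks)} unique)")
    return unique_blocks, block_map

# Example: repeated blocks deduplicate, and compression shrinks what remains.
sample = (b"A" * BLOCK_SIZE) * 8 + b"some unique trailing data"
reduce_data(sample)
```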
To ensure reliability, the compression must be lossless, a requirement that has challenged SSD manufacturers. “Many all-flash storage array manufacturers offer inline compression options, but the technology is usually proprietary to the storage vendor,” explains LaChapelle. She expects the situation to improve in the near future as SSD vendors strive to provide maximum capacity at the lowest possible price.
Beyond compression, SSD vendors are also turning to the PCI-Express 4.0 specification for improved bandwidth, including faster read and write speeds.
More insights into the public cloud
Mapping and modeling data usage across the entire enterprise application landscape is key to understanding how public cloud storage will ultimately be leveraged. LaChapelle notes that because public cloud storage services typically charge for data ingress and egress, as well as for transfers between regions and zones, being able to predict data movement is important for cost-effective and efficient management of public cloud repositories. Unplanned traffic between on-premises data centers and public cloud data repositories can also create performance issues due to latency. “It is best to fully understand what this means before applications with co-dependencies are distributed between public cloud and on-premises environments,” she advises. Storage providers, meanwhile, have increased their analytics capabilities: HPE InfoSight, NetApp ActiveIQ, and Pure Storage Pure1 Meta are among the tools companies can use to get more comprehensive storage insights.
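Because egress and inter-region transfer fees drive much of that unpredictability, even a rough model helps. The sketch below estimates a monthly transfer bill from hypothetical per-GB rates; the rates and workload figures are placeholders, not any provider's actual pricing.

```python
# Hypothetical per-GB rates -- substitute your provider's actual price sheet.
RATES_PER_GB = {
    "egress_to_internet": 0.09,
    "inter_region_transfer": 0.02,
    "ingress": 0.00,  # ingress is often free, but verify with your provider
}

def estimate_monthly_transfer_cost(gb_by_category: dict) -> float:
    """Sum expected data-transfer charges for one month."""
    return sum(RATES_PER_GB.get(category, 0.0) * gb
               for category, gb in gb_by_category.items())

# Example workload: placeholder volumes for a single application.
workload = {
    "egress_to_internet": 1_500,     # GB pulled back on-premises or served to users
    "inter_region_transfer": 4_000,  # GB replicated between regions
    "ingress": 10_000,               # GB uploaded
}
print(f"Estimated transfer cost: ${estimate_monthly_transfer_cost(workload):,.2f}/month")
```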
Object Storage
The world of storage is undergoing a change driven by cloud-native applications, including databases, analytics, data warehousing, artificial intelligence, and machine learning technologies. “These applications drive the use of object storage as primary storage,” said David Boland, vice president of cloud strategy at cloud storage provider Wasabi. Boland notes that there are three main types of storage: object, block, and file. “Object storage is unique in providing low cost and high performance at exabyte scale,” he commented. Boland adds that a recent IDC survey found that 80% of respondents believe object storage can support their core IT initiatives, including IoT, reporting, and analytics.
Object storage has been around since the early 2000s, says Boland, but only in the past two years has the combination of NVMe SSD performance improvements and significantly lower price points made large-scale deployment economically viable.
Performance is no longer a drawback of object storage. Early on, object storage tended to be slower than file or block approaches at locating data, but that is no longer the case. Boland says high-performance metadata databases, database engines, and NVMe SSDs now deliver the performance needed for busy structured-content applications such as databases.
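For context on how applications consume object storage, the sketch below writes and reads an object through the S3-compatible API that most object stores, Wasabi included, expose. The endpoint, credentials, bucket, and key are placeholders.

```python
import boto3

# Placeholder endpoint and credentials -- any S3-compatible object store works similarly.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example-object-store.com",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Write an object: data is addressed by bucket + key rather than by file path or block.
s3.put_object(
    Bucket="analytics-data",
    Key="events/2024/05/01/events.json",
    Body=b'{"example": "payload"}',
)

# Read it back.
response = s3.get_object(Bucket="analytics-data", Key="events/2024/05/01/events.json")
payload = response["Body"].read()
print(f"Retrieved {len(payload)} bytes")
```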
Immutable Backup
Immutable backup technology is attracting the interest of a growing number of businesses, from finance to legal, and for good reason. “Immutable means ‘unchangeable,’” explains Chris Karounos, SAN administrator at IT provider SHI International. “An immutable backup is a copy of data that is fixed, unchangeable, and can never be deleted, encrypted, or modified.”
Immutable storage can be applied to disk, SSD, and tape media, as well as to cloud storage. Creating it is simple and convenient: the user simply creates a file that incorporates the desired immutability policy. “Immutable backups are the only way to be 100% protected against any kind of backup deletion or modification,” says Karounos. “In an increasingly fast-paced business environment where threats are constantly evolving, immutable backups are a game changer.”
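As one concrete example of how such a policy is applied in practice, S3-compatible object stores offer Object Lock, which writes objects in a WORM state until a retention date passes. The bucket, key, and retention period below are placeholders, and the bucket must have been created with versioning and Object Lock enabled.

```python
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client("s3")  # or any S3-compatible endpoint that supports Object Lock

# Write a backup object in COMPLIANCE mode: it cannot be deleted or overwritten,
# even by administrators, until the retention date passes.
s3.put_object(
    Bucket="backup-vault",                 # bucket created with Object Lock enabled
    Key="db-backups/2024-05-01.dump",
    Body=b"<backup contents>",
    ObjectLockMode="COMPLIANCE",
    ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=90),
)
```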
Time series database technology
A time series database (TSDB) is designed to support high-speed data reads and writes. Jesse White, CTO at OpenNMS Group, an open source network monitoring and management platform provider, says TSDBs are unlocking new levels of flexibility offered by existing object storage solutions. “Specifically, the storage and index layouts of these TSDBs have been intelligently designed to take advantage of the scalability, resiliency, and low cost associated with object storage while minimizing the impact of latency.” TSDBs that run on object storage are aimed at enterprises, managed service providers, and other organizations that collect large volumes of time series data for observability and/or analytics.
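The layout idea White describes can be pictured with a simple sketch: samples are grouped into time-bucketed blocks and written as compressed objects whose keys encode the series and time range, so reads can skip straight to the relevant objects. This is a conceptual illustration, not the actual layout used by Cortex, Mimir, or InfluxDB IOx.

```python
import json
import zlib
from datetime import datetime, timezone

def block_key(metric: str, start_ts: int, bucket_seconds: int = 3600) -> str:
    """Derive an object key from the metric name and the hour the samples fall in."""
    bucket_start = start_ts - (start_ts % bucket_seconds)
    hour = datetime.fromtimestamp(bucket_start, tz=timezone.utc)
    return f"tsdb/{metric}/{hour:%Y/%m/%d/%H}.json.zz"

def write_block(object_store: dict, metric: str, samples: list[tuple[int, float]]) -> str:
    """Compress an hour's worth of (timestamp, value) samples into one object."""
    key = block_key(metric, samples[0][0])
    object_store[key] = zlib.compress(json.dumps(samples).encode())
    return key

# Example: an in-memory dict stands in for an object storage bucket.
bucket = {}
samples = [(1714556400 + i * 30, 0.5 + i * 0.01) for i in range(120)]  # one hour at 30s
print(write_block(bucket, "node_cpu_usage", samples))
```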
Stable versions of TSDBs that can take advantage of object storage, such as Cortex, Mimir, and InfluxDB IOx, are already available. White notes: “The object storage solutions they depend on are ubiquitous across all major cloud providers, and open source solutions like MinIO and Ceph provide interoperable APIs.”
White reports that although TSDBs that leverage object storage tend to support multiple APIs, object storage APIs have not yet been standardized. He added: “Applications may need to adapt to the deployed solution.”
Simplify storage
Tong Zhang, a professor in the Department of Electrical, Computer and Systems Engineering at Rensselaer Polytechnic Institute, believes that the hottest trend in storage is the need for less storage. Zhang, who is also chief scientist at storage technology company ScaleFlux, said the idea that storage is cheap, so keep it all, no longer holds true. “The aggregate costs of storage are now taking their toll,” he notes.
Zhang believes that data is accumulating faster than companies can deploy data center capacity to hold it. “We need to focus our energy on being efficient, and several strategies can be used simultaneously, including metadata processing to reduce payloads, pre-filtering to reduce network congestion, and transparent compression built into drives that increases capacity density without burdening the CPU,” he said.
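As a loose illustration of the pre-filtering idea, the sketch below drops records that fail a relevance test before they are ever transmitted or written, so both the network and the storage layer only see the reduced stream. The field names and threshold are made up for the example.

```python
from typing import Iterable, Iterator

def prefilter(records: Iterable[dict], min_severity: int = 3) -> Iterator[dict]:
    """Yield only records worth keeping, so downstream storage sees a smaller payload."""
    for record in records:
        # Hypothetical relevance test: keep warnings and above, drop routine noise.
        if record.get("severity", 0) >= min_severity:
            yield record

raw_stream = [
    {"sensor": "pump-1", "severity": 1, "msg": "heartbeat"},
    {"sensor": "pump-1", "severity": 4, "msg": "pressure spike"},
    {"sensor": "pump-2", "severity": 2, "msg": "routine reading"},
]
kept = list(prefilter(raw_stream))
print(f"stored {len(kept)} of {len(raw_stream)} records")
```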