
Genome sequencing: unveiling the data behind the science

High-performance computing is increasingly important for the continued progression of modern bioinformatics.

Posted by Ben Rossi

High-performance computing (HPC) has always had a significant role to play within computational science. Its ability to tackle compute-intensive tasks, from weather forecasting to molecular modelling, makes it the ideal solution for delivering complex, fast-paced data analytics and insights.

One area set to boom in the coming years is genome research. By 2025, it has been predicted that between 100 million and as many as two billion human genomes are likely to be sequenced. The result is an incredible amount of data that must be stored, processed, analysed and shared to unlock its value to the bioinformatics industry. Scientific advances in the genome space are an inevitable result of this increasing focus on sequencing, as is apparent in recent developments made by a number of medical centres and organisations.

See also: How one hospital is using big data to save lives

But why exactly is HPC so adept at dealing with genome datasets, how is it aiding such developments, and what does this growing trend mean for the data centre and computing industries?

Data cocktail

With more than 6.4 billion bases in a person's DNA, it's easy to understand why there is an escalating demand for HPC and high-performance storage in this area. Simply put: the amount of data is vast.

Genome data is also incredibly complex, and the genomic analyses produced when it is processed, examined and tested for scientific benefit are more complex still. Added to this, the analysis process often involves comparing new data against multiple large-scale external datasets, or integrating it with data from other sources such as public records, collaborators and research partners.

The resulting cocktail of bioinformatics information generates massive amounts of data, which needs large computer processing and storage capacity to translate it from 1s and 0s into meaningful and usable insights. The solution is to match these complex data and processing requirements with equally capable compute, and that is where HPC comes in.

High-performance progression

The uptake of HPC in the bioinformatics sector is already proving incredibly beneficial to medical and genome communities, delivering efficiencies on a number of levels.

Pharmaceutical giant GSK is one example of a company leveraging HPC's accelerating impact on genome research. The firm has previously claimed that its HPC facility has helped reduce gene-gene interaction analysis (where multiple genes interact with and modify each other) for a set of 36,000 genes to just 20 minutes. The same process could take up to 25 days on more traditional compute infrastructure.

HPC is driving significant cost efficiencies too. According to the National Human Genome Research Institute, ten years ago the typical cost to generate a high-quality 'draft' human genome sequence was around $14 million. Thanks to advances in technology and compute, by late 2015 this figure had fallen dramatically to around $1,500, a drop that has undoubtedly freed up funds for use in other areas of medical innovation.

The Center for Paediatric Genomic Medicine at The Children's Mercy Hospital in Kansas City is another example. The hospital uses HPC and advanced technologies to aid the diagnosis of rare paediatric diseases in its patients. The ability to run genome sequences quickly and efficiently has directly translated into faster diagnosis, treatment and better provision for those in the Center's care.
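To make the parallelism behind figures like GSK's more concrete, the sketch below scores every gene pair independently and spreads the pairs across worker processes. It is a minimal illustration under stated assumptions (a toy gene count, random data and a placeholder scoring function), not GSK's actual pipeline or a real statistical interaction test.

```python
# Minimal sketch of an exhaustively parallel gene-gene interaction scan.
# Illustrative only: the data and scoring function are placeholders.
from itertools import combinations
from multiprocessing import Pool
import random

N_GENES = 200      # toy figure; a real scan of 36,000 genes has ~648 million pairs
N_SAMPLES = 50     # expression values per gene, randomly generated here

# Fake "expression matrix": one list of sample values per gene.
random.seed(0)
EXPRESSION = [[random.random() for _ in range(N_SAMPLES)] for _ in range(N_GENES)]

def interaction_score(pair):
    """Score one gene pair; here, the absolute covariance of the two profiles.
    A real analysis would use a proper statistical interaction test."""
    i, j = pair
    a, b = EXPRESSION[i], EXPRESSION[j]
    mean_a = sum(a) / len(a)
    mean_b = sum(b) / len(b)
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b)) / len(a)
    return (i, j, abs(cov))

if __name__ == "__main__":
    pairs = combinations(range(N_GENES), 2)   # every gene pair, generated lazily
    # Each pair is independent, so the work spreads across all available cores;
    # on an HPC cluster the same pattern is sharded across many machines.
    with Pool() as pool:
        results = pool.map(interaction_score, pairs, chunksize=1000)
    top = sorted(results, key=lambda r: r[2], reverse=True)[:5]
    print("strongest-scoring pairs:", top)
```

Because each pair can be scored in isolation, the same pattern scales from the cores of a single server to the nodes of a cluster, which is why an HPC facility can compress a scan that would otherwise take weeks into minutes.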
A new home for DNA data

So, as more life sciences organisations turn to HPC to process their large datasets, demand is growing for scalable and secure data centre solutions that can handle their HPC requirements. For some, the solution is within reach, but as data volumes and compute demands rise, even research centres with very large computational platforms are feeling the strain. Where this is the case, IT decision makers are looking to external data centre providers to support HPC operations by supplementing compute capacity and improving operational costs.

See also: Capitalising on the power of big data

These colocation data centre providers must therefore offer campuses that are 'genome-ready': facilities with the power infrastructure, resiliency levels and computing resources needed to drive HPC loads cost-effectively. Moving data to remote campuses with these qualities, often found in regions like the Nordics, gives research centres the medium- and high-power computing density they require at significantly lower energy costs. Thanks to their location and proximity to 100% renewable power resources, the Nordics are ideally placed to serve not only bioinformatics centres but any company requiring HPC to process its data.
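A rough sense of why power price drives these siting decisions can come from a back-of-envelope calculation like the one below; the load, PUE and tariffs are purely illustrative assumptions, not quoted figures for any provider or region.

```python
# Back-of-envelope comparison of annual electricity cost for an HPC deployment.
# All figures are illustrative assumptions, not quoted prices.

IT_LOAD_KW = 1_000          # assumed 1 MW of HPC equipment running continuously
HOURS_PER_YEAR = 24 * 365
PUE = 1.2                   # assumed facility overhead (cooling, power distribution)

def annual_power_cost(price_per_kwh: float) -> float:
    """Yearly electricity bill for the assumed load at a given tariff."""
    return IT_LOAD_KW * PUE * HOURS_PER_YEAR * price_per_kwh

for label, tariff in [("Nordic renewable tariff (assumed)", 0.05),
                      ("typical metro tariff (assumed)", 0.15)]:
    print(f"{label}: ${annual_power_cost(tariff):,.0f} per year")
```

At these assumed tariffs the same workload costs roughly three times as much to power in the higher-priced location, which is the arithmetic behind moving sustained HPC loads to regions with cheap, renewable power.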

It should be expected that these near-Arctic locations may soon become the pre-eminent destinations for establishing the genome-ready, HPC-ready data centres this sector needs. This is a trend the data centre industry must be prepared for.

But wherever bioinformatics data is stored, analysed and understood, one thing is for sure: HPC is the key to advancing our understanding of genome complexities, and unlocking the medical innovations hidden in the data of our DNA.

Sourced from Jorge Balcells, director of technical services, Verne Global