Techh.info/tech: technology hourly

Supercomputers

05.05.2020
23:01 ScienceDaily.com Supercomputer simulations present potential active substances against coronavirus

Several drugs approved for treating hepatitis C viral infection were identified as potential candidates against COVID-19, the disease caused by the SARS-CoV-2 coronavirus. This is the result of research based on extensive calculations using the MOGON II supercomputer at Johannes Gutenberg University Mainz (JGU).

15:03 Phys.org Supercomputer simulations present potential active substances against coronavirus

Several drugs approved for treating hepatitis C viral infection were identified as potential candidates against COVID-19, a new disease caused by the SARS-CoV-2 coronavirus. This is the result of research based on extensive calculations using the MOGON II supercomputer at Johannes Gutenberg University Mainz (JGU). One of the most powerful computers in the world, MOGON II is operated by JGU and the Helmholtz Institute Mainz.

22.04.2020
05:54 Arxiv.org CS Demonstrating a Pre-Exascale, Cost-Effective Multi-Cloud Environment for Scientific Computing. (arXiv:2004.09492v1 [cs.PF])

Scientific computing needs are growing dramatically with time and are expanding in science domains that were previously not compute intensive. When compute workflows spike well in excess of the capacity of their local compute resource, capacity should be temporarily provisioned from somewhere else both to meet deadlines and to increase scientific output. Public Clouds have become an attractive option due to their ability to be provisioned with minimal advance notice. The available capacity of cost-effective instances, however, is not well understood. This paper presents the expansion of IceCube's production HTCondor pool using cost-effective GPU instances in preemptible mode gathered from the three major Cloud providers, namely Amazon Web Services, Microsoft Azure and the Google Cloud Platform. Using this setup, we sustained for a whole workday about 15k GPUs, corresponding to around 170 PFLOP32s, integrating over one EFLOP32 hour worth of science output for a price tag of about $60k. In this
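
The quoted aggregates can be sanity-checked with plain arithmetic; the per-GPU and per-GPU-hour figures below are derived from the abstract's own numbers (assuming an 8-hour workday) rather than reported by the paper:

```python
# Back-of-envelope arithmetic for the figures quoted above (illustrative only;
# per-GPU throughput is inferred from the aggregate numbers, not measured).
gpus = 15_000                # sustained cloud GPUs
aggregate_pflop32 = 170      # quoted aggregate FP32 throughput, PFLOP32/s
hours = 8                    # "a whole workday" (assumed)
cost_usd = 60_000            # quoted price tag

tflop32_per_gpu = aggregate_pflop32 * 1e3 / gpus    # ~11.3 TFLOP32/s per GPU
eflop32_hours = aggregate_pflop32 / 1e3 * hours     # ~1.4 EFLOP32-hours integrated
usd_per_gpu_hour = cost_usd / (gpus * hours)        # ~$0.50 per preemptible GPU-hour

print(f"{tflop32_per_gpu:.1f} TFLOP32/s per GPU")
print(f"{eflop32_hours:.2f} EFLOP32-hours over the workday")
print(f"${usd_per_gpu_hour:.2f} per GPU-hour")
```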

20.04.2020
19:33 Phys.org Supercomputers and Archimedes' principle enable calculating nanobubble diffusion in nuclear fuels

Researchers from the Moscow Institute of Physics and Technology have proposed a method that speeds up the calculation of nanobubble diffusion in solid materials. This method makes it possible to create significantly more accurate fuel models for nuclear power plants. The paper was published in the Journal of Nuclear Materials.

17.04.2020
04:47 News-Medical.Net Supercomputer helps find 64 compounds as potential inhibitors of the COVID-19 protease

A new study published in Chemrxiv in April 2020 describes the identification of 64 compounds that could potentially be inhibitors of replication of the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) that is causing the current COVID-19 disease pandemic. The research was carried out using a pharmacophore model of an essential viral enzyme, mining data from the Food and Drug Administration (FDA) database of conformations of approved drugs.

16.04.2020
15:46 ExtremeTech.com Folding@Home Now More Powerful Than All the Supercomputers on Earth

By your powers combined: The Folding@Home distributed computer network is now more powerful than the entire Top 500 supercomputing network. Together.

14.04.2020
06:23 Arxiv.org CS Software-Defined Network for End-to-end Networked Science at the Exascale. (arXiv:2004.05953v1 [cs.NI])

Domain science applications and workflow processes are currently forced to view the network as an opaque infrastructure into which they inject data and hope that it emerges at the destination with an acceptable Quality of Experience. There is little ability for applications to interact with the network to exchange information, negotiate performance parameters, discover expected performance metrics, or receive status/troubleshooting information in real time. The work presented here is motivated by a vision for a new smart network and smart application ecosystem that will provide a more deterministic and interactive environment for domain science workflows. The Software-Defined Network for End-to-end Networked Science at Exascale (SENSE) system includes a model-based architecture, implementation, and deployment which enables automated end-to-end network service instantiation across administrative domains. An intent-based interface allows applications to express their high-level service
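
To make "applications express their high-level service" concrete, here is a purely hypothetical sketch of an intent expressed as data. The field names, endpoints, and feasibility check are all invented for illustration and do not reflect the actual SENSE interface; the point is only that the application requests an outcome (a volume by a deadline) rather than a raw circuit:

```python
# Hypothetical illustration of an "intent" a workflow might hand to an
# intent-based network service layer. All names here are invented.
intent = {
    "service": "point-to-point",
    "endpoints": ["dtn01.site-a.example.org", "dtn07.site-b.example.org"],
    "deadline": "2020-04-14T18:00:00Z",   # move the dataset by this time
    "volume_tb": 500,                     # total data to transfer
    "min_bandwidth_gbps": 100,            # negotiable floor, not a circuit spec
}

def feasible(intent, hours_remaining):
    """Crude feasibility check: can the volume move before the deadline
    at the requested floor bandwidth? (TB -> terabits, Gbps -> Tb/s)"""
    needed_hours = intent["volume_tb"] * 8 / (intent["min_bandwidth_gbps"] / 1000) / 3600
    return needed_hours <= hours_remaining

print(feasible(intent, hours_remaining=24))   # ~11 h needed -> True
```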

08.04.2020
05:06 News-Medical.Net Supercomputer Fugaku will be used to help combat COVID-19 pandemic

The supercomputer Fugaku, which is currently being installed in Kobe, Japan under a RIKEN-led project, will be put to use to help combat the COVID-19 pandemic, by giving priority to research selected by the Japanese Ministry of Education, Culture, Sports, Science and Technology.

07.04.2020
23:43 Ixbt.com AMD has not abandoned the idea of creating its monstrous hybrid Exascale Heterogeneous Processor

In the summer of 2015, roughly two years before the first AMD processors on the Zen architecture reached the market, word surfaced online that the company was developing a server-class hybrid processor called the Exascale Heterogeneous Processor (EHP). Details were scarce, but it was said to have 32 cores, a large GPU, and HBM2 memory. As far as we know, AMD never released anything of the kind. First-generation Epyc server processors did indeed have 32 cores, although today's flagships already carry 64. Even so, the EHP looks impressive even now, since Epyc CPUs have neither a GPU nor their own HBM2 memory. As it turns out, work on the EHP apparently continues, as indicated by numerous recent AMD patents. These cover individual technological solutions rather than the product as a whole, but everything suggests that AMD has not abandoned the idea of creating the Exascale Heterogeneous Processor. Moreover, the company has already partially implemented some of the ideas originally attributed to the EHP. For example,

13:56 WhatReallyHappened.com Scientists are using the world's most powerful supercomputers to speed up the development of treatments for the deadly coronavirus

Supercomputers around the world are being used to speed up the search for a treatment for the deadly coronavirus that has put the world in lockdown. Researchers from University College London say the powerful machines can process information in days that would take a regular computer months to compute.

06.04.2020
22:39 ScientificAmerican.Com Inside the Global Race to Fight COVID-19 Using the World's Fastest Supercomputers

The director of IBM Research explains how the COVID-19 High Performance Computing Consortium came together in just a few days.

08:12 Zdnet.com Got an idea for dealing with COVID-19? A Taiwanese supercomputer could help

The National Center for High-Performance Computing will make its supercomputer available to researchers for six months.

03.04.2020
12:57 Technology.org Coronavirus simulations completed on supercomputer

Scientists are preparing a massive computer model of the coronavirus they expect will provide new insights into how

02.04.2020
14:52 Technology.org Preparing for exascale: Eliminating disruptions on the path to sustainable fusion energy

With the world’s most powerful path-to-exascale supercomputing resources at their disposal, William Tang and colleagues are combining computer

30.03.2020
18:31 Zdnet.com Coronavirus: From startups to supercomputers, how tech is trying to help tackle COVID-19

Governments around the world have called on the help of the technology industry. Here are a few of the projects in the works.

27.03.2020
15:13 ExtremeTech.com Folding@Home Crushes Exascale Barrier, Now Faster Than Dozens of Supercomputers

The Folding@Home network continues its inescapable march to world domination. Today, the exaflop. Tomorrow, the world.

25.03.2020
15:36 Technology.org Coronavirus Massive Simulations Completed on Frontera Supercomputer

New simulations can help researchers design new drugs and vaccines to combat the coronavirus Scientists are preparing a

07:45 News-Medical.Net Supercomputer helps benchmark new molecular docking tool

With the American Cancer Society estimating 1.76 million new cases and more than 600,000 deaths during 2019 in the U.S. alone, cancer remains a critical healthcare challenge.

24.03.2020
18:35 WhatReallyHappened.com Gamers Saving The World: Gamers, Companies Create Virtual Supercomputer Researching Ways To Fight Coronavirus

18:29 ScienceDaily.com Coronavirus massive simulations completed on Frontera supercomputer

A coronavirus envelope all-atom computer model is being developed, building on the success of all-atom influenza virus simulations. Test runs of the molecular dynamics simulations for the coronavirus model used up to 4,000 nodes, or about 250,000 of Frontera's processing cores. The full model can help researchers design new drugs and vaccines to combat the coronavirus.

23.03.2020
14:59 ExtremeTech.com Folding@Home Now Faster Than World’s Top 7 Supercomputers Combined

Credit: NIH Computing enthusiasts have driven a tremendous surge in the Folding@Home network's processing power. The distributed computing network is now more powerful than the top seven supercomputers combined. The post Folding@Home Now Faster Than World’s Top 7 Supercomputers Combined appeared first on ExtremeTech.

01:27 CNBC technology IBM and White House to deploy supercomputer power to fight coronavirus outbreak

IBM is partnering with the White House to make a vast amount of supercomputing power available to help researchers stop the spreading coronavirus pandemic, according to the Trump administration.

20.03.2020
20:51 ExtremeTech.com IBM Supercomputer Identifies 77 Compounds That Could Fight Coronavirus

The Summit supercomputer came online several years ago with more computing power than any other non-distributed system. The US Department of Energy announced earlier this month that it would turn the system's massive computing power toward the COVID-19 pandemic. The machine has been crunching the numbers, and it has now identified 77 chemical compounds that could help stop coronavirus.

19:38 CNN Health The world's fastest supercomputer identified chemicals that could stop coronavirus from spreading, a crucial step toward a vaccine

The novel coronavirus presents an unprecedented challenge for scientists: The speed at which the virus spreads means they must accelerate their research.

07:50 News-Medical.Net World’s fastest supercomputer identifies chemicals with potential to stop COVID-19

What makes the coronavirus disease (COVID-19) global pandemic different from the Spanish flu of 1918? In 1918, transportation, technology, and science were in their infancy, and the medical field was not capable of fighting outbreaks. Technology has come a long way: within days, scientists had identified the type of virus spreading in China, and three months later, vaccine trials had started.

19.03.2020
16:43 Technology.org Even The Smallest Problems Need The Biggest Supercomputers

Oden Institute’s Feliciano Giustino applies TACC’s supercomputing power to the development of novel materials at the quantum scale.

16:29 Technology.org Supercomputers Unlock Reproductive Mysteries of Viruses and Life

Stampede2 and Comet systems complete simulations pertinent to coronavirus, DNA replication Fundamental research supported by supercomputers could help

18.03.2020
23:14 ScienceDaily.com Supercomputers unlock reproductive mysteries of viruses and life

Supercomputer simulations support a new mechanism for the budding off of viruses like the coronavirus. The ESCRT-III polymer shows a clear intrinsic twist in molecular dynamics simulations and might play a major role in creating three-dimensional buckling of the cell membrane. A related study used simulations to find a mechanism for DNA base addition during replication.

19:16 Phys.org Supercomputers unlock reproductive mysteries of viruses and life

Fundamental research supported by supercomputers could help lead to new strategies and better technology that combats infectious and genetic diseases.

14.03.2020
19:04 CNBC top news Next-gen supercomputers are fast-tracking treatments for the coronavirus in a race against time

Scientists are using IBM's Summit, the world's fastest supercomputer, to help find promising candidate drugs to fight the coronavirus epidemic.

12.03.2020
12:23 Phys.org Supercomputer helps in tracking East Africa locust outbreak

A supercomputer is boosting efforts in East Africa to control a locust outbreak that poses what the U.N. food agency calls "an unprecedented threat" to the region's food security.

11.03.2020
00:23 ExtremeTech.com The Fastest Supercomputer on Earth Is Being Deployed Against Coronavirus

The world's most powerful supercomputer is being dedicated to hunting down a treatment for coronavirus as Covid-19 cases continue to rise.

10.03.2020
10:33 Zdnet.com IBM Summit supercomputer joins fight against COVID-19

Oak Ridge National Laboratory says early research on existing drug compounds via supercomputing could combat coronavirus.

05.03.2020
17:11 ExtremeTech.com AMD, HP Unveil 2-Exaflop Supercomputer With Epyc, Radeon Instinct

AMD will provide the CPUs and GPUs to power a two-exaflop supercomputer for Lawrence Livermore National Laboratory in early 2023. The machine is expected to be 10x faster than today's most powerful supercomputer.

14:58 Technology.org Supercomputers Drive Ion Transport Research

For a long time, nothing. Then all of a sudden, something. Wonderful things in nature can burst on

04.03.2020
22:26 Zdnet.com HPE taps AMD GPUs and CPUs for the El Capitan supercomputer

The system, which will help protect the US nuclear stockpile, will deliver performance greater than two exaflops, HPE now says -- 10X faster than today's most powerful supercomputer.

12:22 Phys.org Supercomputers drive ion transport research

For a long time, nothing. Then all of a sudden, something. Wonderful things in nature can burst on the scene after long periods of dullness—rare events such as protein folding, chemical reactions, or even the seeding of clouds. Path sampling techniques are computer algorithms that deal with the dullness in data by focusing on the part of the process in which the transition occurs.

02.03.2020
04:15 Zdnet.com Pawsey's new AU$2m HPE supercomputer to support Square Kilometre Array

The new system is expected to provide a dedicated system for astronomers to process in excess of 30 petabytes of data from the Murchison Widefield Array radio telescope.

25.02.2020
08:53 Arxiv.org CS Optimizing High Performance Markov Clustering for Pre-Exascale Architectures. (arXiv:2002.10083v1 [cs.DC])

HipMCL is a high-performance distributed-memory implementation of the popular Markov Cluster Algorithm (MCL) and can cluster large-scale networks within hours using a few thousand CPU-equipped nodes. It relies on sparse matrix computations and makes heavy use of the sparse matrix-sparse matrix multiplication kernel (SpGEMM). The existing parallel algorithms in HipMCL do not scale to Exascale architectures, both because their communication costs dominate the runtime at large concurrencies and because they cannot take advantage of accelerators, which are increasingly popular. In this work, we systematically remove scalability and performance bottlenecks of HipMCL. We enable GPUs by performing the expensive expansion phase of the MCL algorithm on GPU. We propose a CPU-GPU joint distributed SpGEMM algorithm called pipelined Sparse SUMMA and integrate a probabilistic memory requirement estimator that is fast and accurate. We develop a new merging algorithm for the
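
For context, MCL alternates an expansion step (the matrix self-multiplication that makes SpGEMM the dominant kernel, and the phase HipMCL offloads to GPUs) with an inflation step (elementwise powering plus column renormalization). A toy single-node sketch with scipy.sparse, illustrating the kernel structure only, not HipMCL's distributed pipelined Sparse SUMMA:

```python
import numpy as np
from scipy.sparse import csr_matrix

def mcl_step(M, r=2.0):
    """One Markov Cluster iteration: expansion then inflation.
    M is a column-stochastic sparse matrix; r is the inflation exponent."""
    M = M @ M                      # expansion: the SpGEMM kernel discussed above
    M = M.power(r)                 # inflation: elementwise power favors strong flows
    col_sums = np.asarray(M.sum(axis=0)).ravel()
    col_sums[col_sums == 0] = 1.0  # guard empty columns
    return M.multiply(1.0 / col_sums.reshape(1, -1)).tocsr()  # renormalize columns

# Toy graph: two loosely connected triangles (adjacency + self loops).
A = np.array([[1,1,1,0,0,0],
              [1,1,1,0,0,0],
              [1,1,1,1,0,0],
              [0,0,1,1,1,1],
              [0,0,0,1,1,1],
              [0,0,0,1,1,1]], dtype=float)
M = csr_matrix(A / A.sum(axis=0))  # column-normalize to a stochastic matrix
for _ in range(10):
    M = mcl_step(M)
print(np.round(M.toarray(), 2))    # the surviving nonzero blocks reveal two clusters
```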

24.02.2020
16:24 CNN Drones. Disinfecting robots. Supercomputers. How China rallies tech industry to fight coronavirus

China has spent decades nurturing its tech sector. Now, faced with a massive public health crisis, Beijing is pushing its tech companies to join the fight against the novel coronavirus.

20.02.2020
08:07 Arxiv.org CS Honing and proofing Astrophysical codes on the road to Exascale. Experiences from code modernization on many-core systems. (arXiv:2002.08161v1 [cs.DC])

The complexity of modern and upcoming computing architectures poses severe challenges for code developers and application specialists, forcing them to expose the highest possible degree of parallelism in order to make the best use of the available hardware. The second-generation Intel(R) Xeon Phi(TM) (code-named Knights Landing, henceforth KNL) is the latest many-core system, implementing several interesting hardware features such as a large number of cores per node (up to 72), 512-bit-wide vector registers, and high-bandwidth memory. The unique features of KNL make this platform a powerful testbed for modern HPC applications, so the performance of codes on KNL is a useful proxy of their readiness for future architectures. In this work we describe the lessons learnt during the optimisation of the widely used computational astrophysics codes P-Gadget-3, Flash and Echo. Moreover, we present results for the visualisation and analysis tools

18.02.2020
12:48 Zdnet.com The UK's solution to violent storms? A billion-dollar supercomputer

The UK government has just splashed out £1.2 billion ($1.5 billion) to build a supercomputer that can predict extreme weather with greater accuracy.

06:18 Arxiv.org CS Running a Pre-Exascale, Geographically Distributed, Multi-Cloud Scientific Simulation. (arXiv:2002.06667v1 [cs.DC])

As we approach the Exascale era, it is important to verify that existing frameworks and tools will still work at that scale. Moreover, public Cloud computing has been emerging as a viable solution for both prototyping and urgent computing. Using the elasticity of the Cloud, we have thus put in place a pre-exascale HTCondor setup for running a scientific simulation in the Cloud, the chosen application being IceCube's photon propagation simulation. That is, this was not purely a demonstration run; it also produced valuable and much-needed scientific results for the IceCube collaboration. In order to reach the desired scale, we aggregated GPU resources across 8 GPU models from many geographic regions across Amazon Web Services, Microsoft Azure, and the Google Cloud Platform. Using this setup, we reached a peak of over 51k GPUs corresponding to almost 380 PFLOP32s, for a total integrated compute of about 100k GPU hours. In this paper we provide the description of the

17.02.2020
07:45 Telegraph.co.uk Supercomputer to improve weather forecasts to receive £1.2 billion funding from the Government

05.02.2020
12:40 Technology.org Oguz uses ACCRE supercomputer daily for medical image analysis

For seventeen years, Vanderbilt students and researchers have analyzed data with a method much faster than any normal

04.02.2020
08:22 Arxiv.org Physics Implementing a neural network interatomic model with performance portability for emerging exascale architectures. (arXiv:2002.00054v1 [physics.comp-ph])

The two main thrusts of computational science are more accurate predictions and faster calculations; to this end, the zeitgeist in molecular dynamics (MD) simulations is pursuing machine-learned and data-driven interatomic models, e.g. neural network potentials, and novel hardware architectures, e.g. GPUs. Current implementations of neural network potentials are orders of magnitude slower than traditional interatomic models, and while looming exascale computing offers the ability to run large, accurate simulations with these models, achieving portable performance for MD on new and varied exascale hardware requires rethinking traditional algorithms, using novel data structures, and adopting library solutions. We re-implement a neural network interatomic model in CabanaMD, an MD proxy application built on libraries developed for performance portability. Our implementation shows significantly improved on-node scaling in this complex kernel as compared to a current LAMMPS implementation, across

03.02.2020
21:46 ScienceDaily.com Supercomputers help link quantum entanglement to cold coffee

Theoretical physicists have found a deep link between one of the most striking features of quantum mechanics -- quantum entanglement -- and thermalization, which is the process in which something comes into thermal equilibrium with its surroundings.

18:07 SingularityHub.Com Could Photonic Chips Outpace the Fastest Supercomputers?

There’s been a lot of talk about quantum computers being able to solve far more complex problems than conventional supercomputers. The authors of a new paper say they’re on the path to showing an optical computer can do so, too. The idea of using light to carry out computing has a long pedigree, and it […]

08:50 Technology.org U.S. Department of Energy’s Argonne Leadership Computing Facility (ALCF) and HPE Expand High-Performance Computing (HPC) Storage Capacity for Exascale

Hewlett Packard Enterprise (HPE) and the Argonne Leadership Computing Facility (ALCF), a U.S. Department of Energy (DOE) Office of Science

02.02.2020
16:17 WhatReallyHappened.com Mathematicians Solve an Enduring ’42’ Problem Using Planetary Supercomputer

31.01.2020
20:16 Phys.org How supercomputers are helping us link quantum entanglement to cold coffee

Theoretical physicists from Trinity College Dublin have found a deep link between one of the most striking features of quantum mechanics—quantum entanglement—and thermalisation, which is the process in which something comes into thermal equilibrium with its surroundings.

18.01.2020
16:07 ScientificAmerican.Com Supercomputer Scours Fossil Record for Earth's Hidden Extinctions

Paleontologists have charted 300 million years of Earth's history in breathtaking detail.

01:48 Nature.Com Daily briefing: A supercomputer is mining fossil records to uncover unknown extinctions

16.01.2020
22:17 Nature.Com Supercomputer scours fossil record for Earth’s hidden extinctions

14.01.2020
20:44 Zdnet.com This new supercomputer promises faster and more accurate weather forecasts

New hardware will support hundreds of researchers working on medium and long range forecasting.

09.01.2020
16:00 Phys.org Researchers simulate quantum computer with up to 61 quantum bits using a supercomputer with data compression

When trying to debug quantum hardware and software with a quantum simulator, every quantum bit (qubit) counts. Every simulated qubit closer to physical machine sizes halves the gap in computing power between the simulation and the physical hardware. However, the memory requirement of full-state simulation grows exponentially with the number of simulated qubits, and this limits the size of simulations that can be run.
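
The exponential memory wall is easy to make concrete: a full state vector holds 2^n complex amplitudes, so each added qubit doubles the footprint (which is why each simulated qubit "halves the gap" to hardware). A quick back-of-envelope, assuming 16-byte double-precision complex amplitudes:

```python
# Why full-state simulation hits a memory wall: footprint doubles per qubit.
bytes_per_amplitude = 16          # double-precision complex (assumed)
for n in (45, 50, 55, 61):
    pib = (2 ** n) * bytes_per_amplitude / 2 ** 50
    print(f"{n} qubits: {pib:,.1f} PiB of state vector")
# 45 qubits fit in ~0.5 PiB; 61 qubits would need ~32,768 PiB uncompressed,
# which is why data compression is needed to reach that size.
```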

07.01.2020
05:32 Arxiv.org CS Finally, how many efficiencies the supercomputers have? And, what do they measure?. (arXiv:2001.01266v1 [cs.PF])

In the "gold rush" for higher performance numbers, a lot of confusion was introduced in supercomputing. The present paper attempts to clear up the terms through scrutinizing the basic terms, contributions, measurement methods. It is shown that using extremely large number of processing elements in computing systems leads to unexpected phenomena, that cannot be explained in the frame of the classical computing paradigm. The phenomena show interesting parallels with the phenomena experienced in science more than a century ago and through their studying a modern science was introduced. The introduced simple non-technical model enables to set up a frame and formalism enabling to explain the unexplained experiences around supercomputing. The model also enables to derive predictions of supercomputer performance for the near future as well as provides hints for enhancing supercomputer components.

30.12.2019
09:36 Technology.org Argonne’s Mira supercomputer set to retire after years of enabling groundbreaking science

Mira, the 10-petaflop IBM Blue Gene/Q supercomputer first booted up at the U.S. Department of Energy’s (DOE) Argonne National Laboratory in 2012,

19.12.2019
00:38 News-Medical.Net Scientists use supercomputers to shed light on skin cancer formation mechanism

Skin cancer, particularly melanoma, which is the deadliest and most serious type of human skin cancer, begins as a small lesion or blemish. Usually, these blemishes start off as harmless accumulation of melanocytes, which give the skin its color. As the disease progresses, it can spread throughout the body.

17.12.2019
05:31 Arxiv.org CS Optimal Multi-Level Interval-based Checkpointing for Exascale Stream Processing Systems. (arXiv:1912.07162v1 [cs.DC])

State-of-the-art stream processing platforms make use of checkpointing to support fault tolerance, where a "checkpoint tuple" flows through the topology to all operators, indicating a checkpoint and triggering a checkpoint operation. The checkpoint will enable recovering from any kind of failure, be it as localized as a process fault or as widespread as power supply loss to an entire rack of machines. As we move towards Exascale computing, it is becoming clear that this kind of "single-level" checkpointing is too inefficient to scale. Some HPC researchers are now investigating multi-level checkpointing, where checkpoint operations at each level are tailored to specific kinds of failure to address the inefficiencies of single-level checkpointing. Multi-level checkpointing has been shown in practice to be superior, giving greater efficiency in operation over single-level checkpointing. However, to date there is no theoretical basis that provides optimal parameter settings for an
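
For single-level checkpointing there is a classical optimum that multi-level analyses generalize: the Young/Daly interval, which balances checkpoint overhead against expected rework after a failure. A minimal sketch (this is the standard single-level formula, not the paper's multi-level result):

```python
from math import sqrt

def young_daly_interval(checkpoint_cost_s, mtbf_s):
    """Classical single-level optimum: compute time between checkpoints is
    roughly sqrt(2 * checkpoint_cost * MTBF). Multi-level schemes assign a
    different interval to each failure level (process, node, rack, ...)."""
    return sqrt(2.0 * checkpoint_cost_s * mtbf_s)

# Example: 60 s to write a checkpoint, one failure per day on average.
tau = young_daly_interval(60, 24 * 3600)
print(f"checkpoint every ~{tau / 60:.0f} minutes")   # ~54 minutes
```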

12.12.2019
11:42 Arxiv.org Physics Cold numbers: Superconducting supercomputers and presumptive anomaly. (arXiv:1912.05504v1 [physics.hist-ph])

In February 2014 Time magazine announced to the world that the first quantum computer had been put in use. One key component of this computer is the Josephson junction, a superconducting device based on completely different scientific and technological principles from semiconductors. The origin of superconducting computing dates back to the 1960s, to a large-scale 20-year-long IBM project aimed at building ultrafast computers. We present a detailed study of the relationship between Science and Technology making use of the theoretical tools of presumptive anomaly and technological paradigms: superconductors were developed whilst the semiconductor revolution was in full swing. We adopt a historiographical approach - using a snowballing technique to sift through the relevant literature from various epistemological domains and technical publications - to extract theoretically robust insights from a narrative which concerns great scientific advancements, technological leaps forward and

10.12.2019
14:47 Zdnet.com IBM's latest supercomputer will be used... to build even more computers

AiMOS, the 24th most powerful supercomputer worldwide, was recently unveiled at the Rensselaer Polytechnic Institute. Its main job? To find out how to build smarter hardware to support ever-more sophisticated applications of AI.

09.12.2019
05:16 Arxiv.org Physics Benchmarking Supercomputers with the Jülich Universal Quantum Computer Simulator. (arXiv:1912.03243v1 [quant-ph])

We use a massively parallel simulator of a universal quantum computer to benchmark some of the most powerful supercomputers in the world. We find nearly ideal scaling behavior on the Sunway TaihuLight, the K computer, the IBM BlueGene/Q JUQUEEN, and the Intel Xeon based clusters JURECA and JUWELS. On the Sunway TaihuLight and the K computer, universal quantum computers with up to 48 qubits can be simulated by means of an adaptive two-byte encoding to reduce the memory requirements by a factor of eight. Additionally, we discuss an alternative approach to alleviate the memory bottleneck by decomposing entangling gates such that low-depth circuits with a much larger number of qubits can be simulated.
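
The factor-of-eight saving follows directly from replacing 16-byte double-precision complex amplitudes with the adaptive two-byte encoding. Worked numbers for the 48-qubit case mentioned above:

```python
# Where the memory reduction "by a factor of eight" comes from.
n_qubits = 48
amplitudes = 2 ** n_qubits
for label, bytes_per_amp in (("double-complex", 16), ("2-byte encoding", 2)):
    pib = amplitudes * bytes_per_amp / 2 ** 50
    print(f"{label:>16}: {pib:.1f} PiB for {n_qubits} qubits")
# 4.0 PiB shrinks to 0.5 PiB, bringing 48 qubits within reach of the machines listed.
```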

04.12.2019
15:14 Phys.org Image: A cloudy martian night through the eyes of a supercomputer

As NASA's Curiosity rover makes its way over the surface of Mars, it's sometimes accompanied by clouds drifting by in the sky above. Like Earth, the Red Planet has a water cycle, with water molecules moving between the surface and the air, traveling through the atmosphere and coming together to form clouds. The behavior of water-ice clouds on Mars plays a big role in its climate, and this computer simulation shows them forming and dispersing over the course of a Martian day.

03.12.2019
14:41 Zdnet.com AWS wants to reinvent the supercomputer, starting with the network

VP of AWS global infrastructure Peter DeSantis talked up the possibilities of running high performance computing workloads in the cloud, instead of on a 'costly' supercomputer with a pretty paint job.

27.11.2019
23:22 Zdnet.com Brazilian supercomputer back on Top 500 list after funding boost

After a period of funding struggles, the Santos Dumont complex is now back on the list of the world's best supercomputers.

26.11.2019
15:43 LiveScience.com Watch Clouds on Mars Drift by in Supercomputer Simulations

Weather models are a daily staple of life on Earth, but they can go interplanetary as well, sometimes with a boost from Earth's most sophisticated computers.

25.11.2019
07:05 Arxiv.org Physics Challenges in fluid flow simulations using Exascale computing. (arXiv:1911.10020v1 [physics.comp-ph])

In this paper, I discuss the challenges in porting hydrodynamic codes to futuristic exascale HPC systems. In particular, we describe the computational complexities of the finite difference method, the pseudo-spectral method, and the Fast Fourier Transform (FFT). We show how global data communication among the processors brings down the efficiency of pseudo-spectral codes and FFT. It is argued that FFT scaling may saturate at half a million processors. However, finite difference and finite volume codes scale well beyond a million processors, hence they are likely candidates to be tried on exascale systems. Codes based on spectral elements and Fourier continuation, which are more accurate than finite differences, could also scale well on such systems.
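
The heart of a pseudo-spectral method is computing derivatives in Fourier space, which is why its cost and scalability are tied to the FFT and to the global all-to-all data exchange a distributed FFT requires. A minimal serial NumPy sketch of a spectral derivative:

```python
import numpy as np

# Pseudo-spectral derivative: transform, multiply by ik, transform back.
# In a distributed code, each FFT hides the all-to-all global transpose whose
# communication cost the paper argues limits FFT scaling.
n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u = np.sin(3 * x)
k = np.fft.fftfreq(n, d=2 * np.pi / n) * 2 * np.pi   # integer wavenumbers on [0, 2*pi)
du = np.fft.ifft(1j * k * np.fft.fft(u)).real

print(np.max(np.abs(du - 3 * np.cos(3 * x))))        # ~1e-13: spectrally accurate
```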

18.11.2019
20:05 Zdnet.com The world's fastest supercomputers hit higher speeds than ever with Linux

The new list of the world's fastest computers -- supercomputing's Top 500 -- is out, and every one runs faster than a petaflop using Linux.

16:30 Zdnet.com Intel-Lenovo to launch exascale council with NCI as a founding member

Members of a new council, called Project Everyscale, want to see exascale technology broadly adopted.

14.11.2019
15:37 CNBC top news IBM hopes to change weather forecasting around the globe using big data and a new supercomputer

The system is called GRAF, or Global High Resolution Atmospheric Forecasting, and will have many applications globally for governments and industries.

07.11.2019
05:09 Arxiv.org CS Failure Analysis and Quantification for Contemporary and Future Supercomputers. (arXiv:1911.02118v1 [cs.DC])

Large-scale computing systems today are assembled from numerous computing units for the massive computational capability needed to solve problems at scale, which makes failures common events in supercomputing scenarios. Considering the demanding resilience requirements of today's supercomputers, we present a quantitative study on fine-grained failure modeling for contemporary and future large-scale computing systems. We integrate various types of failures from different system hierarchical levels and system components, and summarize the overall system failure rates formally. Given that the system-wise failure rate nowadays needs to be capped under a threshold value for reliability and cost-efficiency purposes, we quantitatively discuss different scenarios of system resilience, analyze the impact of resilience to different error types on the variation of system failure rates, and examine the correlation of hierarchical failure rates. Moreover, we formalize and showcase the resilience efficiency of
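
The simplest version of such rate aggregation, assuming independent exponentially distributed failures, sums per-component failure rates, so system MTBF shrinks roughly as 1/N with machine size. A sketch with invented component numbers (not the paper's measured rates):

```python
# Fine-grained failure aggregation under an independent-exponential assumption:
# the system failure rate is the sum of component rates.
def system_mtbf_hours(component_mtbf_hours, counts):
    """component_mtbf_hours: dict name -> MTBF of one unit (hours)
    counts: dict name -> number of such units in the system"""
    total_rate = sum(counts[name] / mtbf for name, mtbf in component_mtbf_hours.items())
    return 1.0 / total_rate

# Illustrative numbers only.
mtbf = system_mtbf_hours(
    {"node": 5_000_000, "switch": 10_000_000, "psu": 2_000_000},
    {"node": 10_000, "switch": 500, "psu": 20_000},
)
print(f"system MTBF: {mtbf:.1f} hours")   # ~83 h: failures become routine events
```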

06.11.2019
06:23 Zdnet.com Defence awards AU$57m supercomputer contract to Hansen Yuncken

Construction of the supercomputer will begin by the end of the month.

31.10.2019
05:13 Zdnet.com UQ's supercomputer makes AI training for digital pathology hundreds of times faster

The university wants to use AI to 'revolutionise' pathology laboratories across Australia.

28.10.2019
17:03 ScienceNewsDaily.org Supercomputer analyzes web traffic across entire internet

Using a supercomputing system, MIT researchers have developed a model that captures what web traffic looks like around the world on a given day, which can be used as a measurement tool for internet ...

25.10.2019
00:27 WhatReallyHappened.com Is official, Google claims they have a CPU chip that can calculate each SECOND the same data that for a supercomputer can take 50 YEARS. All passwords, bitcoin, current cryptographic codes can be defeated in minutes or days

21.10.2019
14:08 Phys.org New supercomputer simulations explore magnetic reconnection and make a surprising discovery

Magnetic reconnection, a process in which magnetic field lines tear and come back together, releasing large amounts of kinetic energy, occurs throughout the universe. The process gives rise to auroras, solar flares and geomagnetic storms that can disrupt cell phone service and electric grids on Earth. A major challenge in the study of magnetic reconnection, however, is bridging the gap between these large-scale astrophysical scenarios and small-scale experiments that can be done in a lab.

11.10.2019
14:05 Phys.org Summit supercomputer simulates how humans will 'brake' during Mars landing

The type of vehicle that will carry people to the Red Planet is shaping up to be "like a two-story house you're trying to land on another planet. The heat shield on the front of the vehicle is just over 16 meters in diameter, and the vehicle itself, during landing, weighs tens of metric tons. It's huge," said Ashley Korzun, a research aerospace engineer at NASA's Langley Research Center.

08.10.2019
12:19 NewScientist.Com Supercomputer simulates 77,000 neurons in the brain in real-time

A brain-inspired computer can simulate part of the sensory cortex in real time, using tens of thousands of virtual neurons. It is the first time such a complex simulation has run this fast

04.10.2019
09:54 Arxiv.org CS Running Alchemist on Cray XC and CS Series Supercomputers: Dask and PySpark Interfaces, Deployment Options, and Data Transfer Times. (arXiv:1910.01354v1 [cs.DC])

Newly developed interfaces for Python, Dask, and PySpark enable the use of Alchemist with additional data analysis frameworks. We also briefly discuss the combination of Alchemist with RLlib, an increasingly popular library for reinforcement learning, and consider the benefits of leveraging HPC simulations in reinforcement learning. Finally, since data transfer between the client applications and Alchemist is the main overhead Alchemist encounters, we give a qualitative assessment of these transfer times with respect to different factors.

01.10.2019
17:08 Zdnet.com UQ's new supercomputer is pushing the limits in analysing human skull models

The university is using its new Weiner supercomputer to push the limits of research, resulting in breakthroughs in areas such as Alzheimer's Disease.

30.09.2019
13:09 AzoRobotics.com Lincoln Laboratory's New TX-GAIA Computing System Ranked as Most Powerful AI Supercomputer

The new TX-GAIA (Green AI Accelerator) computing system at the Lincoln Laboratory Supercomputing Center (LLSC) has been ranked as the most powerful artificial intelligence supercomputer at any...

27.09.2019
07:00 Arxiv.org CS Exascale Deep Learning to Accelerate Cancer Research. (arXiv:1909.12291v1 [cs.LG])

Deep learning, through the use of neural networks, has demonstrated remarkable ability to automate many routine tasks when presented with sufficient data for training. The neural network architecture (e.g. number of layers, types of layers, connections between layers, etc.) plays a critical role in determining what, if anything, the neural network is able to learn from the training data. The trend for neural network architectures, especially those trained on ImageNet, has been to grow ever deeper and more complex. The result has been ever-increasing accuracy on benchmark datasets at the cost of increased computational demands. In this paper we demonstrate that neural network architectures can be automatically generated, tailored for a specific application, with dual objectives: accuracy of prediction and speed of prediction. Using MENNDL--an HPC-enabled software stack for neural architecture search--we generate a neural network with comparable accuracy to state-of-the-art networks on
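
With dual objectives, candidate architectures cannot be ranked by a single score; a natural device is Pareto dominance over (accuracy, prediction speed). A toy selection sketch in that spirit, which is not MENNDL itself and uses made-up candidates:

```python
# Dual-objective selection: keep architectures that no other candidate beats
# on BOTH accuracy (higher is better) and prediction latency (lower is better).
def pareto_front(candidates):
    """candidates: list of (name, accuracy, latency_ms)."""
    front = []
    for name, acc, lat in candidates:
        dominated = any(a >= acc and l <= lat and (a > acc or l < lat)
                        for _, a, l in candidates)
        if not dominated:
            front.append((name, acc, lat))
    return front

nets = [("net-A", 0.91, 12.0), ("net-B", 0.93, 30.0),
        ("net-C", 0.90, 35.0), ("net-D", 0.89, 8.0)]
print(pareto_front(nets))   # net-C is dominated by net-B; the rest trade off
```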

26.09.2019
09:48 Arxiv.org Physics Exascale Deep Learning for Scientific Inverse Problems. (arXiv:1909.11150v1 [cs.LG])

We introduce novel communication strategies in synchronous distributed Deep Learning consisting of decentralized gradient reduction orchestration and computational graph-aware grouping of gradient tensors. These new techniques produce an optimal overlap between computation and communication and result in near-linear scaling (0.93) of distributed training up to 27,600 NVIDIA V100 GPUs on the Summit Supercomputer. We demonstrate our gradient reduction techniques in the context of training a Fully Convolutional Neural Network to approximate the solution of a longstanding scientific inverse problem in materials imaging. The efficient distributed training on a dataset size of 0.5 PB, produces a model capable of an atomically-accurate reconstruction of materials, and in the process reaching a peak performance of 2.15(4) EFLOPS$_{16}$.
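
What the quoted figures imply per device, as plain arithmetic derived from the abstract's own numbers:

```python
# Per-GPU implications of the quoted aggregates (arithmetic only, not from the paper).
gpus = 27_600
peak_eflops16 = 2.15
scaling = 0.93   # near-linear: achieved throughput relative to perfect linear scaling

tflops16_per_gpu = peak_eflops16 * 1e6 / gpus
print(f"{tflops16_per_gpu:.0f} TFLOPS16 per V100 at peak")       # ~78 TFLOPS16
print(f"loss to imperfect scaling: {(1 - scaling) * 100:.0f}%")  # ~7% of linear
```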

18.09.2019
15:57 Zdnet.com Oracle: This 1,060 Raspberry Pi supercomputer is 'world's largest Pi cluster'

Oracle's Raspberry Pi 3 B+ supercomputer should provide 4,240 cores for processing.

04.09.2019
05:16 Arxiv.org CS Improving the Effective Utilization of Supercomputer Resources by Adding Low-Priority Containerized Jobs. (arXiv:1909.00394v1 [cs.DC])

We propose an approach to utilize idle computational resources of supercomputers. The idea is to maintain an additional queue of low-priority non-parallel jobs and execute them in containers, using container migration tools to break the execution down into separate intervals. We propose a container management system that can maintain this queue and interact with the supercomputer scheduler. We conducted a series of experiments simulating supercomputer scheduler and the proposed system. The experiments demonstrate that the proposed system increases the effective utilization of supercomputer resources under most of the conditions, in some cases significantly improving the performance.
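
A toy model of the idea, with an invented hourly utilization trace and an assumed migration-overhead fraction; it only illustrates why filling idle node-hours with preemptible container jobs raises effective utilization, and is not the paper's container management system or its simulation:

```python
import random

# Nodes the main scheduler leaves idle each hour run low-priority container
# jobs; a migration overhead is charged whenever the main queue reclaims nodes
# and the containers must be checkpointed and moved elsewhere.
random.seed(1)
NODES, HOURS, MIGRATION_OVERHEAD = 100, 24, 0.05   # overhead fraction is assumed
main_hours = backfill_hours = 0.0
prev_idle = 0
for _ in range(HOURS):
    busy = random.randint(60, 100)                 # nodes taken by normal jobs
    idle = NODES - busy
    reclaimed = max(0, prev_idle - idle)           # nodes taken back from backfill
    main_hours += busy
    backfill_hours += idle - MIGRATION_OVERHEAD * reclaimed
    prev_idle = idle

total = NODES * HOURS
print(f"main jobs alone:         {main_hours / total:.0%} utilization")
print(f"with container backfill: {(main_hours + backfill_hours) / total:.0%}")
```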

19.08.2019
05:58 Arxiv.org Statistics Multitask and Transfer Learning for Autotuning Exascale Applications. (arXiv:1908.05792v1 [cs.LG])

Multitask learning and transfer learning have proven to be useful in the field of machine learning when additional knowledge is available to help a prediction task. We aim at deriving methods following these paradigms for use in autotuning, where the goal is to find the optimal performance parameters of an application treated as a black-box function. We show comparative results with state-of-the-art autotuning techniques. For instance, we observe an average 1.5x improvement of the application runtime compared to the OpenTuner and HpBandSter autotuners. We explain how our approaches can be more suitable than some state-of-the-art autotuners for the tuning of any application in general and of expensive exascale applications in particular.
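
In the black-box setting described above, an autotuner repeatedly runs the application with candidate parameter settings and keeps the best. A minimal random-search sketch with an invented stand-in for the application; transfer learning, in this framing, amounts to seeding the search history with evaluations gathered on related applications instead of starting cold:

```python
import random

def run_application(params):
    """Stand-in for launching the real code and timing it (hypothetical model:
    runtime is minimized near tile=64, unroll=4, plus measurement noise)."""
    tile, unroll = params["tile"], params["unroll"]
    return abs(tile - 64) * 0.3 + abs(unroll - 4) * 2.0 + random.random()

random.seed(0)
space = {"tile": [16, 32, 64, 128, 256], "unroll": [1, 2, 4, 8]}
history = []   # (runtime, params); transfer learning would pre-populate this
for _ in range(20):
    params = {k: random.choice(v) for k, v in space.items()}
    history.append((run_application(params), params))

best_time, best_params = min(history, key=lambda t: t[0])
print(best_params, f"{best_time:.2f} s")
```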

13.08.2019
16:36 Zdnet.com Cray lands $600 million contract from DOE to build El Capitan exascale supercomputer

This is the third US exascale win for Cray, which was recently acquired by Hewlett Packard Enterprise​ for $1.3 billion.

01.08.2019
18:43 Phys.org Is your supercomputer stumped? There may be a quantum solution

Some math problems are so complicated that they can bog down even the world's most powerful supercomputers. But a wild new frontier in computing that applies the rules of the quantum realm offers a different approach.

18:38 ScienceDaily.com Is your supercomputer stumped? There may be a quantum solution

A new study details how a quantum computing technique called 'quantum annealing' can be used to solve problems relevant to fundamental questions in nuclear physics about the subatomic building blocks of all matter. It could also help answer other vexing questions in science and industry, too.

04:32 Arxiv.org CS Deploying a Top-100 Supercomputer for Large Parallel Workloads: the Niagara Supercomputer. (arXiv:1907.13600v1 [cs.DC])

Niagara is currently the fastest supercomputer accessible to academics in Canada. It was deployed at the beginning of 2018 and has been serving the research community ever since. This homogeneous 60,000-core cluster, owned by the University of Toronto and operated by SciNet, was intended to enable large parallel jobs and has a measured performance of 3.02 petaflops, debuting at #53 in the June 2018 TOP500 list. It was designed to optimize throughput of a range of scientific codes running at scale, energy efficiency, and network and storage performance and capacity. It replaced two systems that SciNet operated for over 8 years, the Tightly Coupled System (TCS) and the General Purpose Cluster (GPC). In this paper we describe the transition process from these two systems, the procurement and deployment processes, as well as the unique features that make Niagara a one-of-a-kind machine in Canada.

29.07.2019
05:20 Arxiv.org CS Massively Scaling Seismic Processing on Sunway TaihuLight Supercomputer. (arXiv:1907.11678v1 [cs.DC])

Common Midpoint (CMP) and Common Reflection Surface (CRS) are widely used methods for improving the signal-to-noise ratio in the field of seismic processing. These methods are computationally intensive and require high-performance computing. This paper optimizes these methods on the Sunway many-core architecture and implements large-scale seismic processing on the Sunway TaihuLight supercomputer. We propose the following three optimization techniques: 1) we propose a software cache method to reduce the overhead of memory accesses, and share data among CPEs via register communication; 2) we re-design the semblance calculation procedure to further reduce the overhead of memory accesses; 3) we propose a vectorization method to improve the performance when processing small volumes of data within short loops. The experimental results show that our implementations of the CMP and CRS methods on Sunway achieve 3.50x and 3.01x speedup on average compared to the state-of-the-art
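
The semblance referred to in point 2 is a standard coherence measure in seismic processing: the energy of the stacked trace divided by the total energy of the individual traces. A minimal NumPy sketch of the textbook formula (generic, not the Sunway implementation):

```python
import numpy as np

def semblance(traces):
    """Textbook semblance over a window: energy of the stacked trace divided
    by the total energy of N traces. traces has shape (N, samples).
    Values near 1 mean the traces are coherent (a good CMP/CRS fit);
    incoherent noise gives roughly 1/N."""
    n = traces.shape[0]
    stacked = traces.sum(axis=0)
    denom = n * (traces ** 2).sum()
    return float((stacked ** 2).sum() / denom) if denom > 0 else 0.0

t = np.linspace(0, 1, 200)
signal = np.sin(2 * np.pi * 25 * t)
coherent = np.tile(signal, (10, 1)) + 0.1 * np.random.default_rng(0).normal(size=(10, 200))
noise = np.random.default_rng(1).normal(size=(10, 200))
print(f"coherent gather: {semblance(coherent):.2f}")   # close to 1
print(f"pure noise:      {semblance(noise):.2f}")      # close to 1/N = 0.1
```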

02:47 Zdnet.com NCI boasts Australia's fastest supercomputer with AU$70m Gadi system

Touted as Australia's most powerful supercomputer.
