Techh.info/tech technology hourly

Supercomputers

06.06.2022
18:50 NewScientist.Com Are the world's most powerful supercomputers operating in secret?

A supercomputer called Frontier has been officially crowned as the world's first exascale computer - one capable of a billion billion operations per second - but more powerful machines may be out there
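As a back-of-envelope illustration of the exascale threshold (not from the article; the world-population figure is a rough assumption):

```python
# An exaflop is 10**18 floating-point operations per second
# ("a billion billion"). Back-of-envelope scale check.
EXAFLOP = 10**18
seconds_per_year = 365 * 24 * 3600

# If every person on Earth did one calculation per second, how long
# would it take to match ONE second of an exascale machine?
population = 8 * 10**9  # rough world population (assumption)
years = EXAFLOP / population / seconds_per_year
print(f"{years:.1f} years")  # roughly 4 years
```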

04.06.2022
02:50 ScienceDaily.com Great timing, supercomputer upgrade lead to successful forecast of volcanic eruption

In the fall of 2017, a team of geologists had just set up a new volcanic forecasting modeling program on the Blue Waters and iForge supercomputers. Simultaneously, another team was monitoring activity at the Sierra Negra volcano in the Galapagos Islands, Ecuador. The teams shared their insights and what happened next was the fortuitous forecast of the June 2018 Sierra Negra eruption five months before it occurred.

03.06.2022
21:03 Phys.org Great timing and supercomputer upgrade lead to successful forecast of volcanic eruption

In the fall of 2017, geology professor Patricia Gregg and her team had just set up a new volcanic forecasting modeling program on the Blue Waters and iForge supercomputers. Simultaneously, another team was monitoring activity at the Sierra Negra volcano in the Galapagos Islands, Ecuador. One of the scientists on the Ecuador project, Dennis Geist of Colgate University, contacted Gregg, and what happened next was the fortuitous forecast of the June 2018 Sierra Negra eruption five months before it occurred.

02.06.2022
21:32 ScienceMag.org News at a glance: China’s carbon pledge, ARPA-H’s interim head, and an exascale computer

The latest in science and policy

05:22 Arxiv.org Physics Supercomputers against strong coupling in gravity with curvature and torsion. (arXiv:2206.00658v1 [gr-qc])

Many theories of gravity are spoiled by strongly coupled modes: the high computational cost of Hamiltonian analysis can obstruct the identification of these modes. A computer algebra implementation of the Hamiltonian constraint algorithm for curvature and torsion theories is presented. These non-Riemannian or Poincaré gauge theories suffer notoriously from strong coupling. The implementation forms a package (the 'Hamiltonian Gauge Gravity Surveyor', HiGGS) for the xAct tensor manipulation suite in Mathematica. Poisson brackets can be evaluated in parallel, meaning that Hamiltonian analysis can be done on silicon, and at scale. Accordingly, HiGGS is designed to survey the whole Lagrangian space with high-performance computing resources (clusters and supercomputers). To demonstrate this, the space of 'outlawed' Poincaré gauge theories is surveyed, in which a massive parity-even/odd vector or parity-odd tensor torsion particle accompanies the usual graviton. The survey spans possible

05:22 Arxiv.org CS Modeling pre-Exascale AMR Parallel I/O Workloads via Proxy Applications. (arXiv:2206.00108v1 [cs.DC])

The present work investigates the modeling of pre-exascale input/output (I/O) workloads of Adaptive Mesh Refinement (AMR) simulations through a simple proxy application. We collect data from the AMReX Castro framework running on the Summit supercomputer for a wide range of scales and mesh partitions for the hydrodynamic Sedov case as a baseline to provide sufficient coverage to the formulated proxy model. The non-linear analysis data production rates are quantified as a function of a set of input parameters such as output frequency, grid size, number of levels, and the Courant-Friedrichs-Lewy (CFL) condition number for each rank, mesh level and simulation time step. Linear regression is then applied to formulate a simple analytical model which makes it possible to translate AMReX inputs into MACSio proxy I/O application parameters, resulting in a simple "kernel" approximation for data production at each time step. Results show that MACSio can simulate actual AMReX non-linear "static" I/O

01.06.2022
18:33 ExtremeTech.com AMD-Powered Supercomputer is The First to Break The Exascale Barrier

Insert obligatory "Can it run Crysis?" joke here.

31.05.2022
14:00 NewScientist.Com World's first exascale supercomputer Frontier smashes speed records

Frontier, a supercomputer built by Oak Ridge National Laboratory, is the first capable of an exaflop - a billion billion operations per second

10:23 Arxiv.org CS CP2K on the road to exascale. (arXiv:2205.14741v1 [cond-mat.mtrl-sci])

The CP2K program package, which can be considered the Swiss Army knife of atomistic simulations, is presented with a special emphasis on ab-initio molecular dynamics using the second-generation Car-Parrinello method. After outlining current and near-term development efforts with regard to massively parallel low-scaling post-Hartree-Fock and eigenvalue solvers, novel approaches on how we plan to take full advantage of future low-precision hardware architectures are introduced. Our focus here is on combining our submatrix method with the approximate computing paradigm to address the imminent exascale era.

07:13 Technology.org Frontier supercomputer debuts as world’s fastest, breaking exascale barrier

The Frontier supercomputer at the Department of Energy’s Oak Ridge National Laboratory earned the top ranking today as the world’s

05:42 Zdnet.com US knocks out Japan to take the supercomputer Top500 crown

The Frontier system has been declared the "first true exascale machine" after it surpassed an HPL score of one exaflop.

30.05.2022
18:32 NYT Technology U.S. Retakes Top Spot in Supercomputer Race

A massive machine in Tennessee has been deemed the world’s speediest. Experts say two supercomputers in China may be faster, but the country didn’t participate in the rankings.

10:20 SingularityHub.Com Age of Exascale: Wickedly Fast Frontier Supercomputer Ushers in the Next Era of Computing

Today, Oak Ridge National Laboratory’s Frontier supercomputer was crowned fastest on the planet in the semiannual Top500 list. Frontier more than doubled the speed of the last titleholder, Japan’s Fugaku supercomputer, and is the first to officially clock speeds over a quintillion calculations a second—a milestone computing has pursued for 14 years. That’s a big […]

25.05.2022
23:10 ScienceDaily.com Physicist uses intuition, supercomputers to identify new high-temperature superconductor

In 2021, scientists discovered a new form of superconducting nickelate using computational methods. The discovery lets researchers explore similarities and differences between nickelates and cuprates -- promising copper-based materials -- and among nickelates. Both families of materials appear to display 'super-exchange,' where the material trades electrons in copper or nickel atoms through a pathway that contains oxygen, rather than directly. This, researchers believe, may be one of the factors that governs superconductivity.

05:06 Arxiv.org Quantitative Biology Breaking the Exascale Barrier for the Electronic Structure Problem in Ab-Initio Molecular Dynamics. (arXiv:2205.12182v1 [physics.comp-ph])

The non-orthogonal local submatrix method applied to electronic-structure based molecular dynamics simulations is shown to exceed 1.1 EFLOP/s in FP16/FP32 mixed floating-point arithmetic when using 4,400 NVIDIA A100 GPUs of the Perlmutter system. This is enabled by a modification of the original method that pushes the sustained fraction of the peak performance to about 80%. Example calculations are performed for SARS-CoV-2 spike proteins with up to 83 million atoms.

05:06 Arxiv.org Physics Breaking the Exascale Barrier for the Electronic Structure Problem in Ab-Initio Molecular Dynamics. (arXiv:2205.12182v1 [physics.comp-ph])

The non-orthogonal local submatrix method applied to electronic-structure based molecular dynamics simulations is shown to exceed 1.1 EFLOP/s in FP16/FP32 mixed floating-point arithmetic when using 4,400 NVIDIA A100 GPUs of the Perlmutter system. This is enabled by a modification of the original method that pushes the sustained fraction of the peak performance to about 80%. Example calculations are performed for SARS-CoV-2 spike proteins with up to 83 million atoms.

24.05.2022
08:02 Arxiv.org Math ALPINE: A set of performance portable plasma physics particle-in-cell mini-apps for exascale computing. (arXiv:2205.11052v1 [physics.comp-ph])

Alpine consists of a set of mini-apps that make use of exascale computing capabilities to numerically solve classical problems in plasma physics. It is based on IPPL (Independent Parallel Particle Layer), a framework designed around performance-portable and dimension-independent particles and fields. In this work, IPPL is used to implement a particle-in-cell scheme. The article describes in detail the following mini-apps: weak and strong Landau damping, bump-on-tail and two-stream instabilities, and the dynamics of an electron bunch in a charge-neutral Penning trap. We benchmark the simulations with varying parameters such as grid resolutions ($512^3$ to $2048^3$) and numbers of simulation particles ($10^9$ to $10^{11}$). We show strong and weak scaling and analyze the performance of different components on several pre-exascale architectures such as Piz-Daint, Cori, Summit and Perlmutter.
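For a rough sense of why grids at these resolutions demand supercomputers, here is a simple memory estimate, assuming one 8-byte double per grid point for a single scalar field (a hypothetical layout, not the IPPL data structure):

```python
# Memory for a 3D scalar field at resolution n^3, assuming one
# 8-byte double per grid point (hypothetical layout, single field).
def field_gib(n: int) -> float:
    return n**3 * 8 / 2**30

print(field_gib(512))   # 1.0 GiB
print(field_gib(2048))  # 64.0 GiB -- per field, before any particles
```

The particle data (10^9 to 10^11 particles, each carrying position and velocity) dwarfs even this, which is why such runs are distributed across many nodes.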

08:02 Arxiv.org Physics ALPINE: A set of performance portable plasma physics particle-in-cell mini-apps for exascale computing. (arXiv:2205.11052v1 [physics.comp-ph])

Alpine consists of a set of mini-apps that make use of exascale computing capabilities to numerically solve classical problems in plasma physics. It is based on IPPL (Independent Parallel Particle Layer), a framework designed around performance-portable and dimension-independent particles and fields. In this work, IPPL is used to implement a particle-in-cell scheme. The article describes in detail the following mini-apps: weak and strong Landau damping, bump-on-tail and two-stream instabilities, and the dynamics of an electron bunch in a charge-neutral Penning trap. We benchmark the simulations with varying parameters such as grid resolutions ($512^3$ to $2048^3$) and numbers of simulation particles ($10^9$ to $10^{11}$). We show strong and weak scaling and analyze the performance of different components on several pre-exascale architectures such as Piz-Daint, Cori, Summit and Perlmutter.

20.05.2022
14:03 Technology.org Voyager Supercomputer Enters Testbed Phase

Voyager, the experimental compute resource newly installed at the San Diego Supercomputer Center (SDSC), is ready for use.

18.05.2022
09:25 Arxiv.org CS Accelerating X-Ray Tracing for Exascale Systems using Kokkos. (arXiv:2205.07976v1 [cs.DC])

The upcoming exascale computing systems Frontier and Aurora will draw much of their computing power from GPU accelerators. The hardware for these systems will be provided by AMD and Intel, respectively, each supporting their own GPU programming model. The challenge for applications that harness one of these exascale systems will be to avoid lock-in and to preserve performance portability. We report here on our results of using Kokkos to accelerate a real-world application on NERSC's Perlmutter Phase 1 (using NVIDIA A100 accelerators) and the testbed system for OLCF's Frontier (using AMD MI250X). By porting to Kokkos, we were able to successfully run the same X-ray tracing code on both systems and achieved speed-ups between 13% and 66% compared to the original CUDA code. These results are a highly encouraging demonstration of using Kokkos to accelerate production science code.

16.05.2022
07:43 Arxiv.org CS GROMACS in the cloud: A global supercomputer to speed up alchemical drug design. (arXiv:2201.06372v2 [cs.DC] UPDATED)

We assess the costs and efficiency of state-of-the-art high-performance cloud computing compared to a traditional on-premises compute cluster. Our use case is atomistic simulations carried out with the GROMACS molecular dynamics (MD) toolkit, with a focus on alchemical protein-ligand binding free energy calculations. We set up a compute cluster in the Amazon Web Services (AWS) cloud that incorporates various instances with Intel, AMD, and ARM CPUs, some with GPU acceleration. Using representative biomolecular simulation systems, we benchmark how GROMACS performs on individual instances and across multiple instances. Thereby we assess which instances deliver the highest performance and which are the most cost-efficient for our use case. We find that, in terms of total costs including hardware, personnel, room, energy and cooling, producing MD trajectories in the cloud can be as cost-efficient as an on-premises cluster, given that optimal cloud instances are chosen.

06.05.2022
19:12 Phys.org Supercomputer simulations reveal the details of coronavirus fusion

The mystery of exactly how the SARS-CoV-2 virus infects human lung cells remains largely hidden from experimental scientists. Now, however, the devilish details of the mechanism by which the coronavirus fuses to host cells have been suggested through simulations by University of Chicago researchers using the Frontera supercomputer at the Texas Advanced Computing Center (TACC).

06:13 Arxiv.org CS Three-body problem -- from Newton to supercomputer plus machine learning. (arXiv:2106.11010v2 [cs.OH] UPDATED)

The famous three-body problem can be traced back to Newton in 1687, yet only a few families of periodic orbits were found in the 300 years thereafter. In this paper, we propose an effective approach and roadmap to numerically obtain planar periodic orbits of three-body systems with arbitrary masses by means of machine learning based on an artificial neural network (ANN) model. Given any known periodic orbit as a starting point, this approach can provide more and more periodic orbits (of the same family name) with variable masses, while the mass domain having periodic orbits becomes larger and larger, and the ANN model becomes wiser and wiser. Finally, we have an ANN model trained by means of all obtained periodic orbits of the same family, which provides a convenient way to give accurate enough predictions of periodic orbits with arbitrary masses for physicists and astronomers. It suggests that the high-performance computer and artificial intelligence (including machine learning) should be

03.05.2022
07:02 Arxiv.org CS Lifetime-based Method for Quantum Simulation on a New Sunway Supercomputer. (arXiv:2205.00393v1 [cs.DC])

Faster classical simulation becomes essential for the validation of quantum computers, and tensor network contraction is a widely applied simulation approach. Due to memory limitations, slicing is adopted to help cut down the memory size by reducing tensor dimensions, which also incurs additional computation overhead. This paper proposes novel lifetime-based methods to reduce the slicing overhead and improve computing efficiency, including: an interpretation of slicing overhead, an in-place slicing strategy to find the smallest slicing set, a corresponding iterative method, and an adaptive path refiner customized for the Sunway architecture. Experiments show that our in-place slicing strategy reduces the slicing overhead to less than 1.2 and obtains 100-200 times speedups over related efforts. The resulting simulation time is reduced from 304 s (2021 Gordon Bell Prize) to 149.2 s on the Sycamore RQC, with a sustained mixed-precision performance of 416.5 Pflops using over 41M cores

25.04.2022
17:03 Technology.org Supercomputer Center Replaces Lead-Acid Backup Batteries with Green Alternative

Urban Electric Power announced this week that its rechargeable alkaline battery technology had been installed at the San

30.03.2022
13:11 New York Times Turing Award Won by Programmer Who Paved Way for Supercomputers

In the 1970s, Jack Dongarra created code and concepts that allowed software to work easily with the world’s most powerful computing machines.

12:33 Zdnet.com Jack Dongarra, who made supercomputers usable, awarded 2021 ACM Turing prize

He made supercomputing usable with programs such as LINPACK and BLAS and laid the groundwork for the democratization of supercomputing in the cloud.

12:12 NYT Technology Turing Award Won by Programmer Who Paved Way for Supercomputers

In the 1970s, Jack Dongarra created code and concepts that allowed software to work easily with the world’s most powerful computing machines.

28.03.2022
23:25 Zdnet.com The Dept. of Energy's mini supercomputer packs a massive computing punch

The "Crusher," a 1.5-cabinet iteration of the soon-to-launch Frontier supercomputer, is speeding up some pivotal projects.

25.03.2022
14:07 Technology.org Hawaiian-Emperor Undersea Mystery Revealed with Supercomputers

The Hawaiian-Emperor seamount chain spans almost four thousand miles from the Hawaiian Islands to the Detroit Seamount in the North

22.03.2022
21:22 ScienceDaily.com Hawaiian-Emperor undersea mystery revealed with supercomputers

Kinematic plate reconstructions and high-resolution global dynamic models developed to quantify the amount of Pacific Plate motion change associated with the Hawaiian -- Emperor Bend. Scientists are hopeful this basic research into Pacific Plate motion can be applied to other associated phenomena such as large earthquakes.

10:46 Phys.org Hawaiian-Emperor undersea mystery revealed with supercomputers

The Hawaiian-Emperor seamount chain spans almost four thousand miles from the Hawaiian Islands to the Detroit Seamount in the north Pacific, an L-shaped chain that goes west then abruptly north. The 60-degree bend in the line of mostly undersea mountains and volcanic islands has puzzled scientists since it was first identified in the 1940s from the data of numerous echo sounding ships.

05:25 Arxiv.org Math Exascale Grid Optimization (ExaGO) toolkit: An open-source high-performance package for solving large-scale grid optimization problems. (arXiv:2203.10587v1 [eess.SY])

This paper introduces the Exascale Grid Optimization (ExaGO) toolkit, a library for solving large-scale alternating current optimal power flow (ACOPF) problems including stochastic effects, security constraints and multi-period constraints. ExaGO can run on parallel distributed memory platforms, including massively parallel hardware accelerators such as graphical processing units (GPUs). We present the details of the ExaGO library including its architecture, formulations, modeling details, and its performance for several optimization applications.

05:25 Arxiv.org CS Exascale Grid Optimization (ExaGO) toolkit: An open-source high-performance package for solving large-scale grid optimization problems. (arXiv:2203.10587v1 [eess.SY])

This paper introduces the Exascale Grid Optimization (ExaGO) toolkit, a library for solving large-scale alternating current optimal power flow (ACOPF) problems including stochastic effects, security constraints and multi-period constraints. ExaGO can run on parallel distributed memory platforms, including massively parallel hardware accelerators such as graphical processing units (GPUs). We present the details of the ExaGO library including its architecture, formulations, modeling details, and its performance for several optimization applications.

15.03.2022
14:28 Technology.org Monitoring Arctic permafrost with satellites, supercomputers and deep learning

Permafrost — ground that has been permanently frozen for two or more years — makes up a large

08.03.2022
23:20 ScienceDaily.com Toward ever-more powerful microchips and supercomputers

A look at the process to extend 'Moore's law,' which has doubled the number of transistors that can be packed on a microchip roughly every two years, and develop new ways to produce more capable, efficient, and cost-effective chips.

19:05 Phys.org Plasma lab findings could lead to ever-more powerful microchips and supercomputers

The information age created over nearly 60 years has given the world the internet, smart phones and lightning-fast computers. Making this possible has been the doubling of the number of transistors that can be packed onto a computer chip roughly every two years, giving rise to billions of atomic-scale transistors that now fit on a fingernail-sized chip. Such "atomic scale" lengths are so tiny that individual atoms can be seen and counted in them.

02.03.2022
08:59 Arxiv.org Statistics Making use of supercomputers in financial machine learning. (arXiv:2203.00427v1 [cs.DC])

This article is the result of a collaboration between Fujitsu and Advestis. The collaboration aims at refactoring and running an algorithm, based on systematic exploration, that produces investment recommendations on the Fugaku high-performance computer, to see whether a very high number of cores could allow for a deeper exploration of the data compared to a cloud machine, hopefully resulting in better predictions. We found that an increase in the number of explored rules results in a net increase in the predictive performance of the final ruleset. Also, in the particular case of this study, we found that using more than around 40 cores does not bring a significant computation-time gain. However, the origin of this limitation is explained by a threshold-based search heuristic used to prune the search space. We have evidence that for similar data sets with less restrictive thresholds, the number of cores actually used could very well be much higher, allowing parallelization to have a
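The plateau the authors report beyond roughly 40 cores is the kind of behavior Amdahl's law describes. A minimal sketch; the serial fraction used here is an illustrative assumption, not a figure from the article:

```python
# Amdahl's law: speedup on p cores when a fraction s of the work
# cannot be parallelized. s = 0.025 is an illustrative assumption.
def amdahl_speedup(p: int, s: float = 0.025) -> float:
    return 1.0 / (s + (1.0 - s) / p)

print(round(amdahl_speedup(40), 1))   # 20.3
print(round(amdahl_speedup(400), 1))  # 36.4 -- 10x the cores, under 2x the gain
```

In the article's case the limiting factor is a pruning heuristic rather than classic serial code, but the shape of the diminishing-returns curve is the same.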

08:59 Arxiv.org Quantitative Finance Making use of supercomputers in financial machine learning. (arXiv:2203.00427v1 [cs.DC])

This article is the result of a collaboration between Fujitsu and Advestis. The collaboration aims at refactoring and running an algorithm, based on systematic exploration, that produces investment recommendations on the Fugaku high-performance computer, to see whether a very high number of cores could allow for a deeper exploration of the data compared to a cloud machine, hopefully resulting in better predictions. We found that an increase in the number of explored rules results in a net increase in the predictive performance of the final ruleset. Also, in the particular case of this study, we found that using more than around 40 cores does not bring a significant computation-time gain. However, the origin of this limitation is explained by a threshold-based search heuristic used to prune the search space. We have evidence that for similar data sets with less restrictive thresholds, the number of cores actually used could very well be much higher, allowing parallelization to have a

08:59 Arxiv.org CS Making use of supercomputers in financial machine learning. (arXiv:2203.00427v1 [cs.DC])

This article is the result of a collaboration between Fujitsu and Advestis. The collaboration aims at refactoring and running an algorithm, based on systematic exploration, that produces investment recommendations on the Fugaku high-performance computer, to see whether a very high number of cores could allow for a deeper exploration of the data compared to a cloud machine, hopefully resulting in better predictions. We found that an increase in the number of explored rules results in a net increase in the predictive performance of the final ruleset. Also, in the particular case of this study, we found that using more than around 40 cores does not bring a significant computation-time gain. However, the origin of this limitation is explained by a threshold-based search heuristic used to prune the search space. We have evidence that for similar data sets with less restrictive thresholds, the number of cores actually used could very well be much higher, allowing parallelization to have a

23.02.2022
17:51 ScienceDaily.com Monitoring Arctic permafrost with satellites, supercomputers, and deep learning

Using deep learning and supercomputers, researchers have been able to identify and map 1.2 billion ice wedge polygons in the Arctic permafrost based on satellite imagery. The data helps establish a baseline from which to detect changes to the region. The researchers trained a deep learning system to identify Arctic features and used TACC's Longhorn supercomputer to analyze the data. The ice wedge data will be available for rapid analysis on the new Permafrost Discovery Gateway.

22.02.2022
23:38 Phys.org Monitoring Arctic permafrost with satellites, supercomputers, and deep learning

Permafrost—ground that has been permanently frozen for two or more years—makes up a large part of the Earth, around 15% of the Northern Hemisphere.

07:29 Arxiv.org CS Distributed Out-of-Memory NMF of Dense and Sparse Data on CPU/GPU Architectures with Automatic Model Selection for Exascale Data. (arXiv:2202.09518v1 [cs.DC])

The need for efficient and scalable big-data analytics methods is more essential than ever due to the exploding size and complexity of globally emerging datasets. Nonnegative Matrix Factorization (NMF) is a well-known explainable unsupervised learning method for dimensionality reduction, latent feature extraction, blind source separation, data mining, and machine learning. In this paper, we introduce a new distributed out-of-memory NMF method, named pyDNMF-GPU, designed for modern heterogeneous CPU/GPU architectures that is capable of factoring exascale-sized dense and sparse matrices. Our method reduces the latency associated with local data transfer between the GPU and host using CUDA streams, and reduces the latency associated with collective communications (both intra-node and inter-node) via NCCL primitives. In addition, sparse and dense matrix multiplications are significantly accelerated with GPU cores, resulting in good scalability. We set new benchmarks for the size of the
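A toy dense NMF with the classic multiplicative updates (Lee-Seung) sketches the factorization being scaled here; this is an illustrative stand-in, not the paper's pyDNMF-GPU code:

```python
import numpy as np

def nmf(X, k, iters=200, eps=1e-9, seed=0):
    """Toy dense NMF via multiplicative updates; X must be nonnegative."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W, H = rng.random((m, k)), rng.random((k, n))
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)  # update H; preserves nonnegativity
        W *= (X @ H.T) / (W @ H @ H.T + eps)  # update W
    return W, H

X = np.random.default_rng(1).random((20, 10))  # small nonnegative test matrix
W, H = nmf(X, k=5)
print(np.linalg.norm(X - W @ H))  # reconstruction error
```

The exascale challenge the paper addresses is doing exactly these matrix products when X does not fit in any single node's memory, which is where the out-of-memory distribution and NCCL communication come in.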

07:29 Arxiv.org CS Distributed non-negative RESCAL with Automatic Model Selection for Exascale Data. (arXiv:2202.09512v1 [cs.DC])

With the boom in the development of computer hardware and software, social media, IoT platforms, and communications, there has been an exponential growth in the volume of data produced around the world. Among these data, relational datasets are growing in popularity as they provide unique insights regarding the evolution of communities and their interactions. Relational datasets are naturally non-negative, sparse, and extra-large. Relational data usually contain triples, (subject, relation, object), and are represented as graphs/multigraphs, called knowledge graphs, which need to be embedded into a low-dimensional dense vector space. Among various embedding models, RESCAL allows learning of relational data to extract the posterior distributions over the latent variables and to make predictions of missing relations. However, RESCAL is computationally demanding and requires a fast and distributed implementation to analyze extra-large real-world datasets. Here we introduce a distributed

21.02.2022
08:54 Arxiv.org Physics Magnetic reconnection in the era of exascale computing and multiscale experiments. (arXiv:2202.09004v1 [physics.plasm-ph])

Astrophysical plasmas have the remarkable ability to preserve magnetic topology, which inevitably gives rise to the accumulation of magnetic energy within stressed regions including current sheets. This stored energy is often released explosively through the process of magnetic reconnection, which produces a reconfiguration of the magnetic field, along with high-speed flows, thermal heating, and nonthermal particle acceleration. Either collisional or kinetic dissipation mechanisms are required to overcome the topological constraints, both of which have been predicted by theory and validated with in situ spacecraft observations or laboratory experiments. However, major challenges remain in understanding magnetic reconnection in large systems, such as the solar corona, where the collisionality is weak and the kinetic scales are vanishingly small in comparison to macroscopic scales. The plasmoid instability or formation of multiple plasmoids in long reconnecting current sheets is one

17.02.2022
15:35 Zdnet.com Europe's fastest computer? Atos unveils BullSequana XH300 'exascale-class' supercomputer

France's Atos unveils the BullSequana XH300, a new supercomputer that could become Europe's fastest, heralding exascale supercomputing.

15.02.2022
20:28 Phys.org Researchers use supercomputers for largest-ever turbulence simulations

Despite being among the topics most researched on supercomputers, a fundamental understanding of the effects of turbulent motion on fluid flows still eludes scientists. A new approach developed at TU Darmstadt and running at the Leibniz Supercomputing Centre aims to change that.

05:20 ScienceDaily.com Researchers use supercomputers for largest-ever turbulence simulations of its kind

Despite being among the most researched topics on supercomputers, a fundamental understanding of the effects of turbulent motion on fluid flows still eludes scientists. A new approach aims to change that.

04.02.2022
19:06 Phys.org Supercomputer and quantum simulations solve a difficult problem of materials science

Understanding the structural properties of molecules found in nature or synthesized in the laboratory has always been the bread and butter of materials scientists. But, with advancements in science and technology, the endeavor has become even more ambitious: discovering new materials with highly desirable properties. To accomplish such a feat systematically, materials scientists rely upon sophisticated simulation techniques that incorporate the rules of quantum mechanics, the same rules which govern the molecules themselves.

05:12 Arxiv.org Physics Tensor Processing Units as Quantum Chemistry Supercomputers. (arXiv:2202.01255v1 [physics.comp-ph])

We demonstrate the use of Google's Tensor Processing Units (TPUs) to both accelerate and scale up density functional theory (DFT) calculations of electronic structure. Utilizing 512 TPU v3 cores, we perform the largest O(N^3) DFT computation to date, with N = 247,848 orbitals, corresponding to a cluster of over 10,000 water molecules with more than 100,000 electrons. A full TPU v3 pod (2048 TPU v3 cores) and a TPU v4 pod (8192 TPU v4 cores) are projected to handle up to N ≈ 500,000 and N ≈ 1,000,000 orbitals respectively. Lower-scaling (e.g. linear-scaling) variants of DFT can consider even larger numbers of orbitals, although they often only work for restricted classes of systems, such as insulating systems, require additional approximations, and incur increased code complexity. As a result, when computationally affordable, cubic-scaling DFT as considered here is preferable due to its algorithmic simplicity and more general applicability. Our work thus
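The projections to larger orbital counts follow from the cubic cost scaling; a back-of-envelope check, assuming perfect O(N^3) scaling and fixed throughput (both simplifying assumptions, not figures from the paper):

```python
# Relative cost of an O(N^3) DFT calculation versus the reported
# N = 247,848-orbital run, assuming perfect cubic scaling.
def relative_cost(n: int, n_ref: int = 247_848) -> float:
    return (n / n_ref) ** 3

print(round(relative_cost(500_000), 1))    # ~8.2x the reference run
print(round(relative_cost(1_000_000), 1))  # ~65.7x
```

This is why the projected larger runs call for full pods with 4x to 16x the core count rather than modest upgrades.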

02.02.2022
14:33 Technology.org New supercomputer to herald next generation of discoveries

One of Australia’s most powerful supercomputers is getting a major upgrade, with Swinburne University of Technology designing and

03:26 Zdnet.com Swinburne to receive AU$18.5 million supercomputer upgrade

The new facility will replace the university's existing OzStar machine that has been operational since 2017.

01.02.2022
00:09 Phys.org The universe is in much sharper focus with new algorithms and supercomputers

With new algorithms and supercomputers, an incredibly detailed radio map of the universe has been created. Now astronomers can look at radio data of galaxies with much more precision. This research was published in Nature Astronomy by Leiden University Ph.D. student Frits Sweijen and colleagues.

27.01.2022
18:55 ExtremeTech.com Meta is Building a Massive New Supercomputer

It'll be used for real-time speech recognition, natural language processing . . . and the metaverse, obviously.

00:39 SingularityHub.Com Meta Is Making a Monster AI Supercomputer for the Metaverse

Meta is building a new supercomputer to train enormous machine learning algorithms. Though only partially complete, the AI Research Supercluster (RSC) already ranks among the most powerful machines on the planet. When it’s finished, the company formerly known as Facebook says it will be the fastest AI supercomputer anywhere. Meta hopes RSC can improve their […]

26.01.2022
07:47 Arxiv.org Quantitative Biology Proteome-scale Deployment of Protein Structure Prediction Workflows on the Summit Supercomputer. (arXiv:2201.10024v1 [q-bio.QM])

Deep learning has contributed to major advances in the prediction of protein structure from sequence, a fundamental problem in structural bioinformatics. With predictions now approaching the accuracy of crystallographic resolution in some cases, and with accelerators like GPUs and TPUs making inference using large models rapid, fast genome-level structure prediction becomes an obvious aim. Leadership-class computing resources can be used to perform genome-scale protein structure prediction using state-of-the-art deep learning models, providing a wealth of new data for systems biology applications. Here we describe our efforts to efficiently deploy the AlphaFold2 program, for full-proteome structure prediction, at scale on the Oak Ridge Leadership Computing Facility's resources, including the Summit supercomputer. We performed inference to produce the predicted structures for 35,634 protein sequences, corresponding to three prokaryotic proteomes and one plant proteome, using under 4,000 …

25.01.2022
15:06 WhatReallyHappened.com Facebook's New 'Cutting-Edge AI Supercomputer' Will Be Used to Censor 'Hate Speech' And 'Misinformation'

Whereas American business titans of the past developed revolutionary new forms of transportation, communication, production, medical treatments and sanitation, Mark Zuckerberg is using his billions to develop an AI supercomputer to censor lawful speech and silence dissent.

10:48 CNN Meta is building an AI supercomputer

Facebook has long bet that artificial intelligence can help it with the difficult task of moderating posts from its billions of users. Now its parent company is taking a step that could move it closer to that elusive goal: building its first supercomputer.

24.01.2022
22:00 RFI.fr Facebook trumpets massive new supercomputer

The US tech giant said the array of machines could process images and video up to 20 times faster than its current systems. The supercomputer, built from thousands of processors, will be used to "seamlessly analyse text, images, and video together; develop new augmented reality tools; and much more", the firm said in a blog post written by two of its artificial intelligence (AI) researchers. They envisage developing AI tools that will, among other things, allow people speaking several different languages to understand each other in real time. Meta said the machine, known as the AI Research SuperCluster (RSC), was already among the five fastest supercomputers and would become the fastest AI machine in the world when fully built in the next few months. Platforms like Facebook and Google have long been criticised for the way they process and utilise the data they take from their users. The two firms currently face legal cases across the European Union that allege data transfers from the …

20:14 NewScientist.Com Meta is building the world's largest AI-specific supercomputer

Facebook’s owner wants extraordinary computing power to develop AI models to recognise speech, translate languages and power 3D worlds

20:06 Zdnet.com Meta says it will soon have the world's fastest AI supercomputer

Ultimately, the company formerly known as Facebook wants the AI Research SuperCluster system to help it develop AI to power the metaverse.

08:46 Technology.org Updated exascale system for Earth simulations delivers twice the speed

A new version of the Energy Exascale Earth System Model, or E3SM, is two times faster than an earlier …

19.01.2022
12:06 Arxiv.org Quantitative Biology GROMACS in the cloud: A global supercomputer to speed up alchemical drug design. (arXiv:2201.06372v1 [cs.DC])

We assess the costs and efficiency of state-of-the-art high-performance cloud computing compared to a traditional on-premises compute cluster. Our use case is atomistic simulations carried out with the GROMACS molecular dynamics (MD) toolkit, with a focus on alchemical protein-ligand binding free energy calculations. We set up a compute cluster in the Amazon Web Services (AWS) cloud that incorporates various instances with Intel, AMD, and ARM CPUs, some with GPU acceleration. Using representative biomolecular simulation systems, we benchmark how GROMACS performs on individual instances and across multiple instances. We thereby assess which instances deliver the highest performance and which are the most cost-efficient for our use case. We find that, in terms of total costs including hardware, personnel, room, energy, and cooling, producing MD trajectories in the cloud can be as cost-efficient as an on-premises cluster, given that optimal cloud instances are chosen.
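The cost comparison described here boils down to a dollars-per-nanosecond-of-trajectory metric. A minimal sketch of that bookkeeping, with invented prices and throughputs (none of the numbers or instance names below are from the paper):

```python
def cost_per_ns(price_per_hour: float, ns_per_day: float) -> float:
    """USD per nanosecond of MD trajectory for a given instance type."""
    return price_per_hour * 24.0 / ns_per_day

# Hypothetical instance types: (hourly price in USD, GROMACS throughput in ns/day)
instances = {
    "cpu_only": (1.20, 40.0),    # -> 0.72 USD/ns
    "gpu_spot": (0.90, 120.0),   # -> 0.18 USD/ns
}
best = min(instances, key=lambda name: cost_per_ns(*instances[name]))
print(best)  # prints "gpu_spot"
```

In practice the paper's full accounting also folds in personnel, room, energy, and cooling on the on-premises side, but the per-instance comparison has this shape.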

06:03 Arxiv.org Physics GROMACS in the cloud: A global supercomputer to speed up alchemical drug design. (arXiv:2201.06372v1 [cs.DC])

We assess the costs and efficiency of state-of-the-art high-performance cloud computing compared to a traditional on-premises compute cluster. Our use case is atomistic simulations carried out with the GROMACS molecular dynamics (MD) toolkit, with a focus on alchemical protein-ligand binding free energy calculations. We set up a compute cluster in the Amazon Web Services (AWS) cloud that incorporates various instances with Intel, AMD, and ARM CPUs, some with GPU acceleration. Using representative biomolecular simulation systems, we benchmark how GROMACS performs on individual instances and across multiple instances. We thereby assess which instances deliver the highest performance and which are the most cost-efficient for our use case. We find that, in terms of total costs including hardware, personnel, room, energy, and cooling, producing MD trajectories in the cloud can be as cost-efficient as an on-premises cluster, given that optimal cloud instances are chosen.

18.01.2022
01:40 Medscape.Com Can Supercomputers Really Keep Up With the Human Brain?

Data scientists face new challenges as their efforts to examine every single neuron in the human brain place extreme processing demands on supercomputers.

12.01.2022
19:24 NewScientist.Com UK’s most powerful supercomputer has booted up and is doing science

ARCHER2, a £79 million machine funded by the UK government, is still in a testing period, but already working on real science such as modelling volcanic plumes

11.01.2022
17:46 Phys.org Scientists use Summit supercomputer, deep learning to predict protein functions at genome scale

A team of scientists led by the Department of Energy's Oak Ridge National Laboratory and the Georgia Institute of Technology is using supercomputing and revolutionary deep learning tools to predict the structures and roles of thousands of proteins with unknown functions.

15:26 Technology.org Scientists use Summit supercomputer, deep learning to predict protein functions at genome scale

A team of scientists led by the Department of Energy’s Oak Ridge National Laboratory and the Georgia …

10:08 Technology.org Citizen Science, Supercomputers and AI

Citizen scientists have helped researchers discover new types of galaxies, design drugs to fight COVID-19, and map the …

07.01.2022
19:22 ScienceDaily.com Light–matter interactions simulated on the world’s fastest supercomputer

Researchers have developed a computational approach for simulating interactions between matter and light at the atomic scale. The team tested their method by modeling light–matter interactions in a thin film of amorphous silicon dioxide, composed of more than 10,000 atoms, using the world's fastest supercomputer, Fugaku. The proposed approach is highly efficient and could be used to study a wide range of phenomena in nanoscale optics and photonics.

15:39 Phys.org Updated exascale system for Earth simulations

The Earth—with its myriad interactions of atmosphere, oceans, land and ice components—presents an extraordinarily complex system for investigation. For researchers, simulating the dynamics of these systems has presented a process that is just as complex. But today, Earth system models capable of weather-scale resolution take advantage of powerful new computers to simulate variations in Earth systems and anticipate decade-scale changes that will critically impact the U.S. energy sector in coming years.

10:31 Nanowerk.com Light-matter interactions at the atomic scale simulated on the world's fastest supercomputer

Light-matter interactions form the basis of many important technologies, including lasers, light-emitting diodes (LEDs), and atomic clocks. However, usual computational approaches for modeling such interactions have limited usefulness and capability. Now, researchers have developed a technique that overcomes these limitations.

09:21 Technology.org Updated exascale system for Earth simulations

A new version of the Energy Exascale Earth System Model (E3SM) is two times faster than the previous …

06.01.2022
14:25 Zdnet.com Now Europe wants its own super-powerful supercomputer

Europe is getting serious about supercomputing, with plans to build a high-end exascale device.

16.12.2021
16:41 Phys.org Toward fusion energy, team models plasma turbulence on the nation's fastest supercomputer

A team modeled plasma turbulence on the nation's fastest supercomputer to better understand plasma behavior

10.12.2021
21:04 Phys.org Exotic six-quark particle predicted by supercomputers

An exotic particle made up of six elementary particles known as quarks, whose existence RIKEN researchers predicted using supercomputers, could deepen our understanding of how quarks combine to form the nuclei of atoms.

07:03 Arxiv.org Physics Establishing a non-hydrostatic global atmospheric modeling system (iAMAS) at 3-km horizontal resolution with online integrated aerosol feedbacks on the Sunway supercomputer of China. (arXiv:2112.04668v1 [physics.ao-ph])

In an era of global warming and dense urbanization, extreme and high-impact weather and air pollution incidents affect everyday life and can cause incalculable loss of life and property. Despite vast advances in numerical simulation of the atmosphere, substantial forecast biases remain. To predict extreme weather, severe air pollution, and abrupt climate change accurately, a numerical atmospheric model must simulate meteorology and atmospheric composition, and their mutual impacts, involving many sophisticated physical and chemical processes, at high spatiotemporal resolution. Global simulation of meteorology and atmospheric composition simultaneously at spatial resolutions of a few kilometers remains challenging due to its intensive computational and input/output (I/O) requirements. Through multi-dimension parallelism and aggressive, finer-grained optimization, …

07.12.2021
10:56 Technology.org Artificial intelligence supercomputer to ‘accelerate research’ at Case Western Reserve University

More than 250 researchers across nearly two dozen research groups, from computer science to materials science to robotics, will …

05.12.2021
10:51 Technology.org Identifying proteins using nanopores and supercomputers

The amount and type of proteins human cells produce provide important details about a person’s health and how …

26.11.2021
18:59 RT.com Can a Russian supercomputer help a national chess hero to win the world title?

Russian chess world championship hopeful Ian Nepomniachtchi is up against it when facing the dominant Magnus Carlsen in Dubai. But could a supercomputer help him to realize his potential?

25.11.2021
18:08 WhatReallyHappened.com More than 300 exoplanets have been discovered in deep space thanks to a newly created algorithm using data from NASA's spacecraft and supercomputer

An additional 301 exoplanets have been confirmed thanks to a new deep learning algorithm, NASA said. The significant addition to the ledger was made possible by the ExoMiner deep neural network, which was created using data from NASA's Kepler spacecraft and its follow-on mission, K2. It runs on the space agency's Pleiades supercomputer and is capable of distinguishing real exoplanets from false positives.

22.11.2021
07:29 Arxiv.org CS Optimisation of job scheduling for supercomputers with burst buffers. (arXiv:2111.10200v1 [cs.PF])

The ever-increasing gap between compute and I/O performance in HPC platforms, together with the development of novel NVMe storage devices (NVRAM), led to the emergence of the burst buffer concept: an intermediate persistent storage layer logically positioned between random-access main memory and a parallel file system. Since the appearance of this technology, numerous supercomputers have been equipped with burst buffers exploring various architectures. Despite the development of real-world architectures as well as research concepts, Resource and Job Management Systems, such as Slurm, provide only marginal support for scheduling jobs with burst buffer requirements. This research is primarily motivated by the alarming observation that burst buffers are omitted from reservations in the backfilling procedure of existing job schedulers. In this dissertation, we build a detailed supercomputer simulator based on Batsim and SimGrid, which is capable of simulating I/O contention and I/O …
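The motivating observation, that backfilling typically reserves compute nodes but not burst buffer capacity, can be illustrated with a toy admission check. This is a hypothetical sketch, not code from the dissertation, Batsim, or Slurm:

```python
from dataclasses import dataclass

@dataclass
class Job:
    nodes: int       # compute nodes requested
    bb_gb: int       # burst buffer capacity requested, in GB
    runtime_s: int   # requested wall time, in seconds

def can_backfill(job: Job, free_nodes: int, free_bb_gb: int, slack_s: int) -> bool:
    """A backfill candidate must fit into the scheduling hole AND into the
    remaining burst buffer capacity. A scheduler that reserves only nodes
    would drop the bb_gb check and could oversubscribe the burst buffer."""
    return (job.nodes <= free_nodes
            and job.bb_gb <= free_bb_gb
            and job.runtime_s <= slack_s)

small = Job(nodes=4, bb_gb=512, runtime_s=600)
# Fits the node/time hole, but the burst buffer is nearly full:
print(can_backfill(small, free_nodes=8, free_bb_gb=256, slack_s=900))  # False
```

Ignoring the burst buffer term admits this job anyway, which is exactly the contention scenario the simulator is built to study.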

18.11.2021
20:51 WhatReallyHappened.com Microsoft now has one of the world's fastest supercomputers (and no, it doesn't run on Windows)

A Microsoft Azure supercomputer dubbed 'Voyager-EUS2' has made it into the rankings of the world's 10 fastest machines. Microsoft's supercomputer, with a benchmark speed of 30 petaflops (Pflop/s), is still well behind China's Tianhe-2A and the US Department of Energy's IBM-based Summit supercomputer, but Microsoft is the only major cloud provider with a supercomputer ranked in the top 10 of the high-performance computing (HPC) Top500 list.

15:37 Zdnet.com Microsoft now has one of the world's fastest supercomputers (and no, it doesn't run on Windows)

Microsoft makes it into the top 10 fastest supercomputers in the world.

11:23 Arxiv.org CS An energy-efficient scheduling algorithm for shared facility supercomputer centers. (arXiv:2111.08978v1 [cs.DC])

The evolution of high-performance computing is associated with growing energy consumption. The performance of computing clusters is increased by raising the performance and number of processors, GPUs, and coprocessors, and an increase in the number of computing elements results in significant growth of energy consumption. Power-grid limits for supercomputer centers (SCCs) are driving the transition to more energy-efficient solutions. Upgrades of computing resources are often done step by step: parts of older supercomputers are removed from service and replaced with newer ones, so a single SCC may operate several computing systems with different performance and power consumption at any time. That is why the problem of scheduling parallel programs on SCC resources to optimize energy consumption while minimizing the increase in execution time (energy-efficient scheduling) is important. The goal of the presented work was the development of a new energy-efficient algorithm for …

17.11.2021
21:35 Zdnet.com MLCommons unveils a new way to evaluate the world's fastest supercomputers

The new MLPerf machine learning metric for supercomputing is designed to capture the aggregate ML capabilities of a whole supercomputer.

18:03 SingularityHub.Com Nvidia’s New Supercomputer Will Create a ‘Digital Twin’ of Earth to Fight Climate Change

It’s crunch time on climate change, and companies, governments, philanthropists, and NGOs around the world are starting to take action, be it through donating huge sums of money to the cause, building a database for precise tracking of carbon emissions, creating a plan for a clean hydrogen economy, or advocating for solar geoengineering—among many other […]

16.11.2021
12:00 Technology.org Updated exascale system for earth simulations

A new version of the Energy Exascale Earth System Model (E3SM) is two times faster than its earlier …

10.11.2021
21:14 ScienceDaily.com Identifying individual proteins using nanopores and supercomputers

The amount and types of proteins our cells produce tell us important details about our health. Researchers have shown that it is possible to identify individual proteins with single-amino acid resolution and nearly 100% accuracy. Their method uses nanopores -- engineered openings that generate an electrical signal when molecules are pulled through by a specific enzyme.

03.11.2021
08:54 News-Medical.Net Scientists tackle antibiotic resistance by using supercomputers

Scientists may have made a giant leap in fighting the biggest threat to human health by using supercomputing to keep pace with the impressive ability of diseases to evolve.

29.10.2021
08:05 Arxiv.org CS Towards Large-Scale Rendering of Simulated Crops for Synthetic Ground Truth Generation on Modular Supercomputers. (arXiv:2110.14946v1 [cs.CV])

Computer Vision problems deal with the semantic extraction of information from camera images. Especially for field crop images, the underlying problems are hard to label and even harder to learn, and the availability of high-quality training data is low. Deep neural networks do a good job of extracting the necessary models from training examples. However, they rely on an abundance of training data that is not feasible to generate or label by expert annotation. To address this challenge, we make use of the Unreal Engine to render large and complex virtual scenes. We rely on the performance of individual nodes by distributing plant simulations across nodes and both generate scenes as well as train neural networks on GPUs, restricting node communication to parallel learning.

28.10.2021
10:48 Arxiv.org CS Closing the "Quantum Supremacy" Gap: Achieving Real-Time Simulation of a Random Quantum Circuit Using a New Sunway Supercomputer. (arXiv:2110.14502v1 [quant-ph])

We develop a high-performance tensor-based simulator for random quantum circuits (RQCs) on the new Sunway supercomputer. Our major innovations include: (1) a near-optimal slicing scheme, and a path-optimization strategy that considers both complexity and compute density; (2) a three-level parallelization scheme that scales to about 42 million cores; (3) a fused permutation and multiplication design that improves the compute efficiency for a wide range of tensor contraction scenarios; and (4) a mixed-precision scheme to further improve the performance. Our simulator effectively expands the scope of simulatable RQCs to include the 10×10 (qubits) × (1+40+1) (depth) circuit, with a sustained performance of 1.2 Eflops (single-precision), or 4.4 Eflops (mixed-precision), as a new milestone for classical simulation of quantum circuits; and reduces the simulation sampling time of Google Sycamore to 304 seconds, from the previously claimed 10,000 years.
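Taking both quoted figures at face value, the reduction from the previously claimed 10,000 years to 304 seconds corresponds to a speedup of roughly a billion:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600            # Julian year in seconds
claimed_sycamore_time = 10_000 * SECONDS_PER_YEAR  # the 10,000-year estimate
sunway_time = 304                                 # seconds, from the abstract

speedup = claimed_sycamore_time / sunway_time
print(f"speedup ~ {speedup:.2e}")  # about 1.0e9
```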

09:41 Arxiv.org Physics Semi-Lagrangian 4d, 5d, and 6d kinetic plasma simulation on large scale GPU equipped supercomputer. (arXiv:2110.14557v1 [physics.comp-ph])

Running kinetic plasma physics simulations using grid-based solvers is very demanding in terms of both memory and computational cost. This is primarily due to the up to six-dimensional phase space and the associated unfavorable scaling of the computational cost as a function of grid spacing (often termed the curse of dimensionality). In this paper, we present 4d, 5d, and 6d simulations of the Vlasov--Poisson equation with a split-step semi-Lagrangian discontinuous Galerkin scheme on graphics processing units (GPUs). The local communication pattern of this method allows an efficient implementation on large-scale GPU-based systems and emphasizes the importance of considering algorithmic and high-performance computing aspects in unison. We demonstrate a single node performance above 2 TB/s effective memory bandwidth (on a node with 4 A100 GPUs) and show excellent scaling (parallel efficiency between 30% and 67%) for up to 1536 A100 GPUs on JUWELS Booster.

25.10.2021
04:42 Arxiv.org CS EXSCALATE: An extreme-scale in-silico virtual screening platform to evaluate 1 trillion compounds in 60 hours on 81 PFLOPS supercomputers. (arXiv:2110.11644v1 [cs.DC])

The social and economic impact of the COVID-19 pandemic demands the reduction of the time required to find a therapeutic cure. In the context of urgent computing, we re-designed the Exscalate molecular docking platform to benefit from heterogeneous computation nodes and to avoid scaling issues. We deployed the Exscalate platform on two top European supercomputers (CINECA-Marconi100 and ENI-HPC5), with a combined computational power of 81 PFLOPS, to evaluate the interaction between 70 billion small molecules and 15 binding sites of 12 viral proteins of SARS-CoV-2. The experiment lasted 60 hours and performed a trillion evaluations overall.
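The abstract's numbers are internally consistent: 70 billion ligands against 15 binding sites is roughly a trillion docking evaluations, and spreading those over 60 hours implies a sustained rate of a few million evaluations per second:

```python
evaluations = 70e9 * 15             # ligand-site pairs: ~1.05e12, i.e. ~1 trillion
runtime_s = 60 * 3600               # 60 hours in seconds
rate = evaluations / runtime_s      # sustained evaluations per second
print(f"{evaluations:.2e} evaluations at {rate:.2e}/s")
```

That works out to just under five million docking evaluations per second across the two machines combined.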

21.10.2021
18:29 Phys.org Supercomputer simulations reveal how protein crowding in cells impacts interactions

Supercomputer simulations by RIKEN researchers have revealed how drug binding to a protein target changes as the surrounding environment becomes more cluttered with other proteins. These simulations could help improve drug development since they shed light on why some drugs work in theory but flop in practice.

16:14 Zdnet.com Nvidia adds GeForce Now RTX 3080 subscription: 'Gamers deserve a supercomputer too'

Nvidia's upgrade to GeForce Now is powered by what the company calls a SuperPOD, which has more than 1,000 GPUs delivering more than 39 petaflops of graphics horsepower.

20.10.2021
04:02 Arxiv.org CS Energy-based Accounting Model for Heterogeneous Supercomputers. (arXiv:2110.09987v1 [cs.DC])

In this paper we present a new accounting model for heterogeneous supercomputers. An increasing number of supercomputing centres adopt heterogeneous architectures consisting of CPUs and hardware accelerators for their systems. Accounting models that use the core hour as the unit of measure are redefined to provide an appropriate charging rate based on the computing performance of different processing elements, as well as their energy efficiency and purchase price. In this paper we provide an overview of existing models and define a new model that, while retaining the core hour as a fundamental concept, takes into account the interplay among resources such as CPUs and RAM, and that bases the GPU charging rate on energy consumption. We believe that this model, designed for the Pawsey Supercomputing Research Centre's next supercomputer, Setonix, has many advantages over other models, introducing carbon footprint as a primary driver in determining the allocation of computational workflow on …
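The kind of charging rule described, core hours for CPUs plus a GPU component tied to measured energy rather than wall time, can be sketched as a simple function. All rates below are invented for illustration; the actual Setonix model differs in its details:

```python
def job_charge(cpu_core_hours: float, gpu_energy_kwh: float,
               cpu_rate: float = 1.0, kwh_rate: float = 50.0) -> float:
    """Service units billed for a job: CPU core hours at a base rate,
    plus a GPU component charged on measured energy consumption
    (rewarding energy-efficient GPU use) rather than on wall time."""
    return cpu_core_hours * cpu_rate + gpu_energy_kwh * kwh_rate

# A job using 1024 core hours and 3.2 kWh of GPU energy:
print(job_charge(1024, 3.2))  # 1184.0 service units
```

Under such a rule, two jobs with identical GPU wall time but different power draw are charged differently, which is what makes energy (and hence carbon footprint) a first-class driver of allocation.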

15.10.2021
21:21 Medscape.Com Supercomputers Mimic Brain Activity, Hunt for COVID Treatments

Data scientists are using a technique known as deep learning, computer algorithms patterned on the brain's signaling networks, to identify combinations of medicines to treat infectious disease.

14.10.2021
23:04 Phys.org Updated Exascale system for Earth simulations is faster than its predecessor

A new version of the Energy Exascale Earth System Model (E3SM) is two times faster than its earlier version released in 2018.

04.10.2021
22:04 Phys.org Supercomputers reveal how X chromosomes fold, deactivate

Using supercomputer-driven dynamic modeling based on experimental data, researchers can now probe the process that turns off one X chromosome in female mammal embryos. This new capability is helping biologists understand the role of RNA and the chromosome's structure in the X inactivation process, leading to a deeper understanding of gene expression and opening new pathways to drug treatments for gene-based disorders and diseases.
