Techh.info/tech: technology hourly

Supercomputers

19.03.2019
22:41 WhatReallyHappened.com US government teams up with Intel and Cray on $500 million plan to build Project Aurora supercomputer capable of completing 1 quintillion calculations per second

A U.S. government-led group is working with chipmaker Intel Corp and Cray Inc to develop and build the nation's fastest computer by 2021 for conducting nuclear weapons and other research, officials said on Monday.
The Department of Energy and the Argonne National Laboratory near Chicago said they are working on a supercomputer dubbed Aurora with Intel, the world's biggest supplier of data center chips, and Cray, which specializes in the ultra-fast machines.

16:57 Telegraph.co.uk US to create world's most powerful supercomputer capable of 1 quintillion calculations per second


16:02 TechnologyReview.com The US is building a $500m ‘exascale’ computer that will be the world’s most powerful

15:14 Phys.org New Argonne supercomputer, built for next-gen AI, will be most powerful in U.S.

The most powerful computer ever built in the United States will make its home at Argonne National Laboratory in 2021, the U.S. Department of Energy and Intel announced today. Aurora, the United States' first exascale computer, will combine unprecedented processing power with the growing potential of artificial intelligence to help solve the world's most important and complex scientific challenges.

14:52 ExtremeTech.com Intel, DOE Announce First-Ever Exascale Supercomputer ‘Aurora’

Intel and the DOE have announced the first exascale computer expected to be deployed. Codenamed Aurora, the system should be ready by 2021.

12:04 New York Times Racing Against China, U.S. Reveals Details of $500 Million Supercomputer

Lab officials predict it will be the first American machine to reach a milestone called “exascale” performance, surpassing a quintillion calculations per second.

12:03 International Herald Tribune Racing Against China, U.S. Reveals Details of $500 Million Supercomputer

Lab officials predict it will be the first American machine to reach a milestone called “exascale” performance, surpassing a quintillion calculations per second.

10:34 Technology.org U.S. Department of Energy and Intel to deliver first exascale supercomputer

Targeted for 2021 delivery, the Argonne National Laboratory supercomputer will enable high-performance computing and artificial intelligence at exascale.

10:16 Ixbt.com Unannounced Intel Xe accelerators will form the basis of Aurora, the first exascale-class supercomputer

Intel has published a press release on its website announcing that, together with the U.S. Department of Energy, it is preparing to deliver the first exascale-class supercomputer, that is, a machine with performance above 1 exaFLOPS, in 2021.
The supercomputer is named Aurora and will be housed at Argonne National Laboratory. The contract as a whole is valued at $500 million.
But the most interesting part is what the supercomputer will be built on. Aurora will include "new Intel technologies designed specifically for the convergence of artificial intelligence and high-performance computing at extreme scale." These include, among other things, solutions based on the Intel Xe compute architecture. Even though we were ultimately told that Intel Xe is not a brand but rather the name of the company's transition from an energy-efficient GPU architecture to a scalable one, in this case the processor…

05:22 Gizmag Intel's next-gen supercomputer to usher in exascale era in 2021


The next generation of supercomputers has an official start date. Intel and the US Department of Energy (DOE) are teaming up to deliver the world's first exascale supercomputer in 2021, giving a huge boost to many different fields of research. Named Aurora, the new system will be a thousand times more powerful than the petascale generation that began in 2008 and is still in wide use today.
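A quick sanity check on those prefixes, in plain Python; the numbers below are just the standard SI scales and the widely reported exascale target, not vendor specifications:

```python
PETA = 10**15  # petascale: the class of machines first reached in 2008
EXA = 10**18   # exascale: the class Aurora is targeting for 2021

# An exascale system is a thousand times faster than a petascale one.
print(EXA // PETA)  # 1000

# At a sustained 1 exaFLOPS, a quintillion (10**18) operations take one second.
print(10**18 / EXA, "second(s)")  # 1.0 second(s)
```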

18.03.2019
23:37 Zdnet.com U.S. Department of Energy plans exaFlop supercomputer in 2021

The effort will leverage Cray's Shasta supercomputing platform as well as Intel technology.

22:51 ScienceNewsDaily.org America’s first exascale supercomputer to be built by 2021

Details of America’s next-generation supercomputer were revealed at a ceremony attended by Secretary of Energy Rick Perry and Senator Dick Durbin at Argonne National Laboratory ...

22:16 NYT Technology Racing Against China, U.S. Reveals Details of $500 Million Supercomputer

Lab officials predict it will be the first American machine to reach a milestone called “exascale” performance, surpassing a quintillion calculations per second.

15.03.2019
15:45 Phys.org Handling trillions of supercomputer files just got simpler

A new distributed file system for high-performance computing available today via the software collaboration site GitHub provides unprecedented performance for creating, updating and managing extreme numbers of files.

05.03.2019
15:03 LiveScience.com Physicists Used Supercomputers to Map the Bone-Crushing Pressures Hiding Inside Protons

If you shrank yourself down and entered a proton, you'd experience among the most intense pressures found anywhere in the universe.

21.02.2019
15:02 Technology.org DTU boasts top-performing supercomputers

Over a five-year period, DTU will invest close to EUR 9.4 million (DKK 70 million) in upgrading and

11:41 Arxiv.org CS 'Zhores' -- Petaflops supercomputer for data-driven modeling, machine learning and artificial intelligence installed in Skolkovo Institute of Science and Technology. (arXiv:1902.07490v1 [cs.DC])

The Petaflops supercomputer "Zhores" recently launched in the "Center for Computational and Data-Intensive Science and Engineering" (CDISE) of Skolkovo Institute of Science and Technology (Skoltech) opens up new exciting opportunities for scientific discoveries in the institute especially in the areas of data-driven modeling, machine learning and artificial intelligence. This supercomputer utilizes the latest generation of Intel and NVidia processors to provide resources for the most compute intensive tasks of the Skoltech scientists working in digital pharma, predictive analytics, photonics, material science, image processing, plasma physics and many more. Currently it places 6th in the Russian and CIS TOP-50 (2018) supercomputer list. In this article we summarize the cluster properties and discuss the measured performance and usage modes of this scientific instrument in

19.02.2019
09:35 Arxiv.org CS ENBB Processor: Towards the ExaScale Numerical Brain Box [Position Paper]. (arXiv:1902.06655v1 [cs.AR])

ExaScale systems will be a key driver for simulations that are essential for advance of science and economic growth. We aim to present a new concept of microprocessor for floating-point computations useful for being a basic building block of ExaScale systems and beyond. The proposed microprocessor architecture has a frontend for programming interface based on the concept of event-driven simulation. The user program is executed as an event-driven simulation using a hardware/software co-designed simulator. This is the flexible part of the system. The back-end exploits the concept of uniform topology as in a brain: a massive packet switched interconnection network with flit credit-based flow control with virtual channels that incorporates seamlessly communication, arithmetic and storage. Floating-point computations are incorporated as on-line arithmetic operators in the output ports of the

24.01.2019
22:52 Phys.org Physicists use supercomputers and AI to create the most accurate model yet of black hole mergers

One of the most cataclysmic events to occur in the cosmos involves the collision of two black holes. Formed from the deathly collapse of massive stars, black holes are incredibly compact—a person standing near a stellar-mass black hole would feel gravity about a trillion times more strongly than they would on Earth. When two objects of this extreme density spiral together and merge, a fairly common occurrence in space, they radiate more power than all the stars in the universe.

02:49 WhatReallyHappened.com IRS Becoming Big Brother With $99-Million Supercomputer – will give the agency the “unprecedented ability to track the lives and transactions of tens of millions of American citizens”

07.01.2019
15:09 AzoRobotics.com Maximum Computing Power and Flexibility with AI-Capable Supercomputer ZF ProAI

ZF launched the newest model of its automotive supercomputer ZF ProAI right before the start of the 2019 Consumer Electronics Show (CES). The ZF ProAI RoboThink central control unit offers the maximum...

03.01.2019
18:36 WhatReallyHappened.com This million-core supercomputer inspired by the human brain breaks all the rules

For all their fleshly failings, human brains are the model that computer engineers have always sought to emulate: huge processing power that's both surprisingly energy efficient, and available in a tiny form factor. But late last year, in an unprepossessing former metal works in Manchester, one machine became the closest thing to an artificial human brain there is.
The one-million core SpiNNaker -- short for Spiking Neural Network Architecture -- is the culmination of decades of work and millions of pounds of investment. The result: a massively parallel supercomputer designed to mimic the workings of the human brain, which it's hoped will give neuroscientists a new understanding of how the mind works and open up new avenues of medical research.

15:52 Zdnet.com This million-core supercomputer inspired by the human brain breaks all the rules

SpiNNaker's spiking neural network mimics the human brain, and could fuel breakthroughs in robotics and health.

17.12.2018
19:05 Phys.org Team wins major supercomputer time to study the edge of fusion plasmas

The U.S. Department of Energy (DOE) has awarded major computer hours on three leading supercomputers, including the world's fastest, to a team led by C.S. Chang of the DOE's Princeton Plasma Physics Laboratory (PPPL). The team is addressing issues that must be resolved for successful operation of ITER, the international experiment under construction in France to demonstrate the feasibility of producing fusion energy—the power that drives the sun and stars—in a magnetically controlled fusion facility called a "tokamak."

12.12.2018
15:11 Zdnet.com The rise, fall, and rise of the supercomputer in the cloud era

Though the personal computer was born from garage projects, the supercomputer had been declining to the back of the garage. That's until a handful of trends conspired to poke the reset button for the industry. Now the race is back on.

10.12.2018
14:44 Phys.org Supercomputers without waste heat

Generally speaking, magnetism and the lossless flow of electrical current ("superconductivity") are competing phenomena that cannot coexist in the same sample. However, for building supercomputers, synergetically combining both states comes with major advantages as compared to today's semiconductor technology, characterized by high power consumption and heat production. Researchers from the Department of Physics at the University of Konstanz have now demonstrated that the lossless electrical transfer of magnetically encoded information is possible. This finding enables enhanced storage density on integrated circuit chips and significantly reduces the energy consumption of computing centres. The results of this study have been published in the current issue of the scientific journal Nature Communications.

07.12.2018
22:48 ScienceDaily.com Supercomputers without waste heat

Physicists explore superconductivity for information processing.

18:18 Nanowerk.com Supercomputers without waste heat

Physicists explore superconductivity for information processing.

06.12.2018
17:16 Phys.org LIGO supercomputer upgrade will speed up groundbreaking astrophysics research

In 2016, an international team of scientists found definitive evidence—tiny ripples in space known as gravitational waves—to support one of the last remaining untested predictions of Einstein's theory of general relativity. The team used the Laser Interferometer Gravitational-Wave Observatory (LIGO), which has since made several gravitational wave discoveries. Each discovery was possible in part because of a global network of supercomputer clusters, one of which is housed at Penn State. Researchers use this network, known as the LIGO Data Grid, to analyze the gravitational wave data.

05.12.2018
18:17 Telegraph.co.uk UK supercomputer gives African farmers early warning of pests and blights

07:56 Arxiv.org Physics Pushing Back the Limit of Ab-initio Quantum Transport Simulations on Hybrid Supercomputers. (arXiv:1812.01396v1 [physics.comp-ph])

The capabilities of CP2K, a density-functional theory package and OMEN, a nano-device simulator, are combined to study transport phenomena from first-principles in unprecedentedly large nanostructures. Based on the Hamiltonian and overlap matrices generated by CP2K for a given system, OMEN solves the Schroedinger equation with open boundary conditions (OBCs) for all possible electron momenta and energies. To accelerate this core operation a robust algorithm called SplitSolve has been developed. It allows to simultaneously treat the OBCs on CPUs and the Schroedinger equation on GPUs, taking advantage of hybrid nodes. Our key achievements on the Cray-XK7 Titan are (i) a reduction in time-to-solution by more than one order of magnitude as compared to standard methods, enabling the simulation of structures with more than 50000 atoms, (ii) a parallel efficiency of 97% when scaling from 756 up to

01.12.2018
00:30 ScienceDaily.com A new way to see stress -- using supercomputers

Supercomputer simulations show that at the atomic level, material stress doesn't behave symmetrically. Widely used atomic stress formulae significantly underestimate stress near stress concentrators such as a dislocation core, crack tip, or interface in a material under deformation. The supercomputers simulated force interactions in a Lennard-Jones perfect single crystal of 240,000 atoms. The findings could help scientists design new materials such as glass or metal that doesn't ice up.

30.11.2018
20:42 Phys.org A new way to see stress—using supercomputers

It's easy to take a lot for granted. Scientists do this when they study stress, the force per unit area on an object. Scientists handle stress mathematically by assuming it to have symmetry. That means the components of stress are identical if you transform the stressed object with something like a turn or a flip. Supercomputer simulations show that at the atomic level, material stress doesn't behave symmetrically. The findings could help scientists design new materials such as glass or metal that doesn't ice up.

29.11.2018
09:42 Arxiv.org CS The L-CSC cluster: Optimizing power efficiency to become the greenest supercomputer in the world in the Green500 list of November 2014. (arXiv:1811.11475v1 [cs.PF])

The L-CSC (Lattice Computer for Scientific Computing) is a general purpose compute cluster built with commodity hardware installed at GSI. Its main operational purpose is Lattice QCD (LQCD) calculations for physics simulations. Quantum Chromo Dynamics (QCD) is the physical theory describing the strong force, one of the four known fundamental interactions in the universe. L-CSC leverages a multi-GPU design accommodating the huge demand of LQCD for memory bandwidth. In recent years, heterogeneous clusters with accelerators such as GPUs have become more and more powerful while supercomputers in general have shown enormous increases in power consumption making electricity costs and cooling a significant factor in the total cost of ownership. Using mainly GPUs for processing, L-CSC is very power-efficient, and its architecture was optimized to provide the greatest possible power efficiency. This

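Green500 ranks systems by performance per watt rather than by raw speed. A minimal sketch of that metric follows; the figures are made up for illustration and are not L-CSC's actual measurements:

```python
def gflops_per_watt(rmax_tflops: float, power_kw: float) -> float:
    """Green500-style efficiency: sustained performance divided by power draw."""
    return (rmax_tflops * 1_000) / (power_kw * 1_000)  # TFLOPS -> GFLOPS, kW -> W

# Hypothetical example values, not measurements of any real system.
print(round(gflops_per_watt(rmax_tflops=300.0, power_kw=57.0), 2), "GFLOPS/W")  # 5.26
```
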
22.11.2018
10:33 Phys.org Meet Michael, the supercomputer designed to accelerate UK research for EV batteries

A new supercomputer designed to speed up research on two of the UK's most important battery research projects has been installed at University College London (UCL). Named Michael, after the UK's most famous battery scientist, Michael Faraday, the supercomputer will reach 265 teraflops at peak performance.

00:11 WhatReallyHappened.com Meet the new supercomputer behind the US nuclear arsenal

20.11.2018
13:19 Arxiv.org Statistics Image Classification at Supercomputer Scale. (arXiv:1811.06992v1 [cs.LG])

Deep learning is extremely computationally intensive, and hardware vendors have responded by building faster accelerators in large clusters. Training deep learning models at petaFLOPS scale requires overcoming both algorithmic and systems software challenges. In this paper, we discuss three systems-related optimizations: (1) distributed batch normalization to control per-replica batch sizes, (2) input pipeline optimizations to sustain model throughput, and (3) 2-D torus all-reduce to speed up gradient summation. We combine these optimizations to train ResNet-50 on ImageNet to 76.3% accuracy in 2.2 minutes on a 1024-chip TPU v3 Pod with a training throughput of over 1.05 million images/second and no accuracy drop.
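The third optimization, summing gradients over a 2-D torus of accelerators, can be pictured as reducing along one grid axis and then the other. The NumPy toy below simulates that idea with ordinary array sums; it is an illustration of the concept, not the paper's TPU interconnect code, and the grid and vector sizes are arbitrary:

```python
import numpy as np

# Toy setup: a 4x8 grid of replicas, each holding a gradient vector of length 16.
rows, cols, dim = 4, 8, 16
grads = np.random.default_rng(0).normal(size=(rows, cols, dim))

# Reduce along grid rows first, then along grid columns, so each phase only
# involves replicas sharing a torus ring; finally broadcast the total back.
row_sums = grads.sum(axis=0, keepdims=True)
total = row_sums.sum(axis=1, keepdims=True)
reduced = np.broadcast_to(total, grads.shape)

# Every replica now holds the same global gradient sum.
assert np.allclose(reduced[2, 3], grads.sum(axis=(0, 1)))
```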

12:46 Arxiv.org CS Image Classification at Supercomputer Scale. (arXiv:1811.06992v1 [cs.LG])

Deep learning is extremely computationally intensive, and hardware vendors have responded by building faster accelerators in large clusters. Training deep learning models at petaFLOPS scale requires overcoming both algorithmic and systems software challenges. In this paper, we discuss three systems-related optimizations: (1) distributed batch normalization to control per-replica batch sizes, (2) input pipeline optimizations to sustain model throughput, and (3) 2-D torus all-reduce to speed up gradient summation. We combine these optimizations to train ResNet-50 on ImageNet to 76.3% accuracy in 2.2 minutes on a 1024-chip TPU v3 Pod with a training throughput of over 1.05 million images/second and no accuracy drop.

19.11.2018
18:03 SingularityHub.Com The SpiNNaker Supercomputer, Modeled After the Human Brain, Is Up and Running

We’ve long used the brain as inspiration for computers, but the SpiNNaker supercomputer, switched on this month, is probably the closest we’ve come to recreating it in silicon. Now scientists hope to use the supercomputer to model the very thing that inspired its design. The brain is the most complex machine in the known universe, […]

14.11.2018
13:23 Euronews.Net New weather supercomputer to be installed in Bologna

A next-generation supercomputer is set to be installed in Bologna, Italy. The new system could help predict the weather with more accuracy, giving people a better chance of preparing for high-impact events such as windstorms or floods.

13.11.2018
10:35 Arxiv.org CS Scalability Evaluation of Iterative Algorithms Used for Supercomputer Simulation of Physical processes. (arXiv:1811.04276v1 [cs.DC])

The paper is devoted to the development of a methodology for evaluating the scalability of compute-intensive iterative algorithms used in simulating complex physical processes on supercomputer systems. The proposed methodology is based on the BSF (Bulk Synchronous Farm) parallel computation model, which makes it possible to predict the upper scalability bound of an iterative algorithm in early phases of its design. The BSF model assumes the representation of the algorithm in the form of operations on lists using high-order functions. Two classes of representations are considered: BSF-M (Map BSF) and BSF-MR (Map-Reduce BSF). The proposed methodology is described by the example of the solution of the system of linear equations by the Jacobi method. For the Jacobi method, two iterative algorithms are constructed: Jacobi-M based on the BSF-M representation and Jacobi-MR based on the BSF-MR

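For context, the Jacobi method used as the running example in that paper is a simple fixed-point iteration. A minimal NumPy version is sketched below; the BSF list-of-operations machinery itself is not reproduced:

```python
import numpy as np

def jacobi(A, b, iters=100):
    """Solve A x = b by Jacobi iteration; assumes A is diagonally dominant."""
    D = np.diag(A)            # diagonal of A
    R = A - np.diagflat(D)    # off-diagonal remainder
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        # Each component is updated independently, which is why the method maps
        # naturally onto "map"-style parallel models such as BSF-M.
        x = (b - R @ x) / D
    return x

A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
print(jacobi(A, b))  # close to np.linalg.solve(A, b)
```
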
08:32 Technology.org Sierra reaches higher altitudes, takes No. 2 spot on list of world’s fastest supercomputers

Sierra, Lawrence Livermore National Laboratory’s (LLNL) newest supercomputer, rose to second place on the list of the world’s

12.11.2018
20:53 ScienceNewsDaily.org US overtakes China in top supercomputer list

A new list of the world's most powerful machines puts the US in the top two spots.

19:55 Zdnet.com US now claims world's top two fastest supercomputers

According to the Top500 List, IBM-built supercomputers Summit and Sierra have dethroned China's Sunway TaihuLight in terms of performance power.

07.11.2018
06:07 Arxiv.org Statistics Mesh-TensorFlow: Deep Learning for Supercomputers. (arXiv:1811.02084v1 [cs.LG])

Batch-splitting (data-parallelism) is the dominant distributed Deep Neural Network (DNN) training strategy, due to its universal applicability and its amenability to Single-Program-Multiple-Data (SPMD) programming. However, batch-splitting suffers from problems including the inability to train very large models (due to memory constraints), high latency, and inefficiency at small batch sizes. All of these can be solved by more general distribution strategies (model-parallelism). Unfortunately, efficient model-parallel algorithms tend to be complicated to discover, describe, and to implement, particularly on large clusters. We introduce Mesh-TensorFlow, a language for specifying a general class of distributed tensor computations. Where data-parallelism can be viewed as splitting tensors and operations along the "batch" dimension, in Mesh-TensorFlow, the user can specify any tensor-dimensions to

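The core idea, choosing which tensor dimension to split across devices, can be illustrated without the Mesh-TensorFlow API itself. In the NumPy sketch below the shapes and the two-worker split are arbitrary assumptions for illustration:

```python
import numpy as np

batch, d_in, d_out = 8, 4, 6
x = np.ones((batch, d_in))   # activations
w = np.ones((d_in, d_out))   # layer weights

# Data parallelism: split the "batch" dimension across two workers,
# each holding a full copy of the weights.
y_data = np.concatenate([shard @ w for shard in np.split(x, 2, axis=0)], axis=0)

# Model parallelism: split the weight matrix's output dimension instead,
# so each worker computes part of every example's output.
y_model = np.concatenate([x @ shard for shard in np.split(w, 2, axis=1)], axis=1)

# Both layouts compute the same mathematical result.
assert np.allclose(y_data, y_model)
```
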
05:56 Arxiv.org CS Defining Big Data Analytics Benchmarks for Next Generation Supercomputers. (arXiv:1811.02287v1 [cs.PF])

The design and construction of high performance computing (HPC) systems relies on exhaustive performance analysis and benchmarking. Traditionally this activity has been geared exclusively towards simulation scientists, who, unsurprisingly, have been the primary customers of HPC for decades. However, there is a large and growing volume of data science work that requires these large scale resources, and as such the calls for inclusion and investments in data for HPC have been increasing. So when designing a next generation HPC platform, it is necessary to have HPC-amenable big data analytics benchmarks. In this paper, we propose a set of big data analytics benchmarks and sample codes designed for testing the capabilities of current and next generation supercomputers.

05:56 Arxiv.org CS Mesh-TensorFlow: Deep Learning for Supercomputers. (arXiv:1811.02084v1 [cs.LG])

Batch-splitting (data-parallelism) is the dominant distributed Deep Neural Network (DNN) training strategy, due to its universal applicability and its amenability to Single-Program-Multiple-Data (SPMD) programming. However, batch-splitting suffers from problems including the inability to train very large models (due to memory constraints), high latency, and inefficiency at small batch sizes. All of these can be solved by more general distribution strategies (model-parallelism). Unfortunately, efficient model-parallel algorithms tend to be complicated to discover, describe, and to implement, particularly on large clusters. We introduce Mesh-TensorFlow, a language for specifying a general class of distributed tensor computations. Where data-parallelism can be viewed as splitting tensors and operations along the "batch" dimension, in Mesh-TensorFlow, the user can specify any tensor-dimensions to

06.11.2018
23:05 Gizmag Million-core neuromorphic supercomputer could simulate an entire mouse brain


After 12 years of work, researchers at the University of Manchester in England have completed construction of a "SpiNNaker" (Spiking Neural Network Architecture) supercomputer. It can simulate the internal workings of up to a billion neurons through a whopping one million processing units.

05.11.2018
21:37 ScientificAmerican.Com A New Supercomputer Is the World's Fastest Brain-Mimicking Machine

The computer has one million processors and 1,200 interconnected circuit boards.

15:20 LiveScience.com New Supercomputer with 1 Million Processors Is World's Fastest Brain-Mimicking Machine

A supercomputer that "thinks" like a brain can simulate neural activity in real time.

02.11.2018
22:54 ExtremeTech.com NASA Will Use ISS Supercomputer for Science Experiments

It was only there for a test run, but now the agency plans to use it for processing data and running experiments.

18:45 Telegraph.co.uk 'Human brain' supercomputer switched on for the first time


18:24 CNN Health A brain-like supercomputer could help Siri understand your accent

Hey Siri, listen up. A multitasking supercomputer that attempts to mimic the human brain was switched on Friday -- and it could be used to help virtual assistants like Apple's Siri and Amazon's Alexa understand your accent.

09:46 News-Medical.Net World's largest neuromorphic supercomputer being switched on for the first time

The world's largest neuromorphic supercomputer designed and built to work in the same way a human brain does has been fitted with its landmark one-millionth processor core and is being switched on for the first time.

31.10.2018
15:07 ExtremeTech.com Nvidia Tesla, AMD Epyc to Power New Berkeley Supercomputer

Nvidia and AMD are the big winners in a new supercomputer announcement that will put Epyc and Tesla silicon in Cray's latest Shasta system.

30.10.2018
21:12 Zdnet.com US Energy Dept. announces new Nvidia-powered supercomputer

The Perlmutter will more than triple the computational power currently available at the National Energy Research Scientific Computing (NERSC) Center.

07:02 Arxiv.org CS FFT, FMM, and Multigrid on the Road to Exascale: performance challenges and opportunities. (arXiv:1810.11883v1 [cs.DC])

FFT, FMM, and multigrid methods are widely used fast and highly scalable solvers for elliptic PDEs. However, emerging large-scale computing systems are introducing challenges in comparison to current petascale computers. Recent efforts have identified several constraints in the design of exascale software that include massive concurrency, resilience management, exploiting the high performance of heterogeneous systems, energy efficiency, and utilizing the deeper and more complex memory hierarchy expected at exascale. In this paper, we perform a model-based comparison of the FFT, FMM, and multigrid methods in the context of these projected constraints. In addition we use performance models to offer predictions about the expected performance on upcoming exascale system configurations based on current technology trends.

29.10.2018
09:02 Technology.org Lawrence Livermore unveils NNSA’s Sierra, world’s third fastest supercomputer

The Department of Energy’s National Nuclear Security Administration (NNSA), Lawrence Livermore National Laboratory (LLNL) and its industry partners

24.10.2018
21:44 ScienceMag.org Three Chinese teams join race to build the world’s fastest supercomputer

Exascale computers promise dramatic advances in climate modeling, genetics studies, and artificial intelligence

23.10.2018
10:25 NewScientist.Com Tiny supercomputers could be made from the skeleton inside your cells

Building a computer out of the skeletons that hold our cells together could make them smaller and far more energy efficient

17.10.2018
15:36 ScienceDaily.com Supermassive black holes and supercomputers

The universe's deep past is beyond the reach of even the mighty Hubble Space Telescope. But a new review explains how creation of the first stars and galaxies is nevertheless being mapped in detail, with the aid of computer simulations and theoretical models -- and how a new generation of supercomputers and software is being built that will fill in the gaps.

09:35 Nanowerk.com Supermassive black holes and supercomputers

Researchers reveal the story of the oldest stars and galaxies, compiled from 20 years of simulating the early universe.

15.10.2018
16:35 Technology.org Supercomputer predicts optical properties of complex hybrid materials

Materials scientists at Duke University computationally predicted the electrical and optical properties of semiconductors made from extended organic

08.10.2018
20:50 Phys.org Supercomputer predicts optical and thermal properties of complex hybrid materials

Materials scientists at Duke University computationally predicted the electrical and optical properties of semiconductors made from extended organic molecules sandwiched by inorganic structures.

05.10.2018
10:37 Arxiv.org CS Exascale Deep Learning for Climate Analytics. (arXiv:1810.01993v1 [cs.DC])

We extract pixel-level masks of extreme weather patterns using variants of Tiramisu and DeepLabv3+ neural networks. We describe improvements to the software frameworks, input pipeline, and the network training algorithms necessary to efficiently scale deep learning on the Piz Daint and Summit systems. The Tiramisu network scales to 5300 P100 GPUs with a sustained throughput of 21.0 PF/s and parallel efficiency of 79.0%. DeepLabv3+ scales up to 27360 V100 GPUs with a sustained throughput of 325.8 PF/s and a parallel efficiency of 90.7% in single precision. By taking advantage of the FP16 Tensor Cores, a half-precision version of the DeepLabv3+ network achieves a peak and sustained throughput of 1.13 EF/s and 999.0 PF/s respectively.

04.10.2018
06:14 Arxiv.org Physics Simulating the weak death of the neutron in a femtoscale universe with near-Exascale computing. (arXiv:1810.01609v1 [hep-lat])

The fundamental particle theory called Quantum Chromodynamics (QCD) dictates everything about protons and neutrons, from their intrinsic properties to interactions that bind them into atomic nuclei. Quantities that cannot be fully resolved through experiment, such as the neutron lifetime (whose precise value is important for the existence of light-atomic elements that make the sun shine and life possible), may be understood through numerical solutions to QCD. We directly solve QCD using Lattice Gauge Theory and calculate nuclear observables such as neutron lifetime. We have developed an improved algorithm that exponentially decreases the time-to solution and applied it on the new CORAL supercomputers, Sierra and Summit. We use run-time autotuning to distribute GPU resources, achieving 20% performance at low node count. We also developed optimal application mapping through a job manager, which

06:14 Arxiv.org CS Simulating the weak death of the neutron in a femtoscale universe with near-Exascale computing. (arXiv:1810.01609v1 [hep-lat])

The fundamental particle theory called Quantum Chromodynamics (QCD) dictates everything about protons and neutrons, from their intrinsic properties to interactions that bind them into atomic nuclei. Quantities that cannot be fully resolved through experiment, such as the neutron lifetime (whose precise value is important for the existence of light-atomic elements that make the sun shine and life possible), may be understood through numerical solutions to QCD. We directly solve QCD using Lattice Gauge Theory and calculate nuclear observables such as neutron lifetime. We have developed an improved algorithm that exponentially decreases the time-to solution and applied it on the new CORAL supercomputers, Sierra and Summit. We use run-time autotuning to distribute GPU resources, achieving 20% performance at low node count. We also developed optimal application mapping through a job manager, which

01.10.2018
23:28 Nanowerk.com Coming soon to exascale computing: Software for chemistry of catalysis

A new 4-year project aims to develop software that will bring the power of exascale computers to the computational study and design of catalytic materials.

27.09.2018
04:10 Arxiv.org CS Programming at Exascale: Challenges and Innovations. (arXiv:1809.10023v1 [cs.DC])

Supercomputers become faster as hardware and software technologies continue to evolve. Current supercomputers are capable of 10^15 floating-point operations per second (FLOPS), a level known as petascale. The high-performance computing (HPC) community is looking forward to systems capable of 10^18 FLOPS, called exascale. Having a system a thousand times faster than the previous generation poses challenges to the HPC community, and these challenges require innovation in both software and hardware. In this paper, the challenges posed by programming exascale systems are reviewed and the developments in the main programming models and systems are surveyed.

26.09.2018
13:35 Zdnet.com Europe's greenest supercomputer: Why energy-efficient HPC is on the rise

MareNostrum 4 Power9 is Europe's greenest supercomputer but it has nothing to do with being situated in a 19th-century church.

24.09.2018
05:11 Arxiv.org Physics Towards a Mini-App for Smoothed Particle Hydrodynamics at Exascale. (arXiv:1809.08013v1 [physics.comp-ph])

The smoothed particle hydrodynamics (SPH) technique is a purely Lagrangian method, used in numerical simulations of fluids in astrophysics and computational fluid dynamics, among many other fields. SPH simulations with detailed physics represent computationally-demanding calculations. The parallelization of SPH codes is not trivial due to the absence of a structured grid. Additionally, the performance of the SPH codes can be, in general, adversely impacted by several factors, such as multiple time-stepping, long-range interactions, and/or boundary conditions. This work presents insights into the current performance and functionalities of three SPH codes: SPHYNX, ChaNGa, and SPH-flow. These codes are the starting point of an interdisciplinary co-design project, SPH-EXA, for the development of an Exascale-ready SPH mini-app. To gain such insights, a rotating square patch test was implemented as

05:11 Arxiv.org CS Towards a Mini-App for Smoothed Particle Hydrodynamics at Exascale. (arXiv:1809.08013v1 [physics.comp-ph])

The smoothed particle hydrodynamics (SPH) technique is a purely Lagrangian method, used in numerical simulations of fluids in astrophysics and computational fluid dynamics, among many other fields. SPH simulations with detailed physics represent computationally-demanding calculations. The parallelization of SPH codes is not trivial due to the absence of a structured grid. Additionally, the performance of the SPH codes can be, in general, adversely impacted by several factors, such as multiple time-stepping, long-range interactions, and/or boundary conditions. This work presents insights into the current performance and functionalities of three SPH codes: SPHYNX, ChaNGa, and SPH-flow. These codes are the starting point of an interdisciplinary co-design project, SPH-EXA, for the development of an Exascale-ready SPH mini-app. To gain such insights, a rotating square patch test was implemented as

21.09.2018
15:36 Phys.org Accelerated architecture of America's fastest supercomputer boosts QCD simulations

In pursuit of numerical predictions for exotic particles, researchers are simulating atom-building quark and gluon particles over 70 times faster on Summit, the world's most powerful scientific supercomputer, than on its predecessor Titan at the US Department of Energy's (DOE's) Oak Ridge National Laboratory (ORNL). The interactions of quarks and gluons are computed using lattice quantum chromodynamics (QCD)—a computer-friendly version of the mathematical framework that describes these strong-force interactions.

10.09.2018
15:06 ExtremeTech.com Japan’s New Supercomputer Is the Fastest Ever for Astronomy Research

To understand how the universe works, scientists often need to turn to computer simulations. Those simulations are getting more powerful thanks to a new Japanese supercomputer called ATERUI II.

07.09.2018
09:29 Arxiv.org Physics Glimpses of Space-Time Beyond the Singularities Using Supercomputers. (arXiv:1809.01747v1 [physics.comp-ph])

A fundamental problem of Einstein's theory of classical general relativity is the existence of singularities such as the big bang. All known laws of physics end at these boundaries of classical space-time. Thanks to recent developments in quantum gravity, supercomputers are now playing an important role in understanding the resolution of big bang and black hole singularities. Using supercomputers, explorations of the very genesis of space and time from quantum geometry are revealing a novel picture of what lies beyond classical singularities and the new physics of the birth of our universe.

03.09.2018
05:57 Arxiv.org Physics Towards Exascale Simulations of the ICM Dynamo with Wombat. (arXiv:1808.10633v1 [physics.comp-ph])

In galaxy clusters, modern radio interferometers observe non-thermal radio sources with unprecedented spatial and spectral resolution. For the first time, the new data allows us to infer the structure of the intra-cluster magnetic fields on small scales via Faraday tomography. This leap forward demands new numerical models for the amplification of magnetic fields in cosmic structure formation - the cosmological magnetic dynamo. Here we present a novel numerical approach to astrophysical MHD simulations aimed at resolving this small-scale dynamo in future cosmological simulations. As a first step, we implement a fifth order WENO scheme in the new code WOMBAT. We show that this scheme doubles the effective resolution of the simulation and is thus less expensive than common second order schemes. WOMBAT uses a novel approach to parallelization and load balancing developed in collaboration with

31.08.2018
00:48 Popular Science University supercomputers are science's unsung heroes, and Texas will get the fastest yet

The machine is called Frontera, and it will be the fastest supercomputer at any university.

29.08.2018
16:55 Phys.org New Texas supercomputer to push the frontiers of science

The National Science Foundation (NSF) announced today that it has awarded $60 million to the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for the acquisition and deployment of a new supercomputer that will be the fastest at any U.S. university and among the most powerful in the world.

28.08.2018
17:15 Zdnet.com Garvan Institute gets new supercomputer for genomic research

Dell EMC is providing the Garvan Institute of Medical Research with a new HPC system to support genomic research and analysis.

27.08.2018
19:39 Phys.org Artificial intelligence project to help bring the power of the sun to Earth is picked for first U.S. exascale system

To capture and control the process of fusion that powers the sun and stars in facilities on Earth called tokamaks, scientists must confront disruptions that can halt the reactions and damage the doughnut-shaped devices. Now an artificial intelligence system under development at the U.S. Department of Energy's (DOE) Princeton Plasma Physics Laboratory (PPPL) and Princeton University to predict and tame such disruptions has been selected as an Aurora Early Science project by the Argonne Leadership Computing Facility, a DOE Office of Science User Facility.

17.08.2018
09:57 Arxiv.org CS Limitations of performance of Exascale Applications and supercomputers they are running on. (arXiv:1808.05338v1 [cs.DC])

The paper highlights that the cooperation of the components of computing systems receives even more focus in the coming age of exascale computing. It shows that inherent performance limitations exist and identifies the major critical contributors to performance on many-processor systems. The extended and reinterpreted simple Amdahl model describes the behavior of existing supercomputers surprisingly well and explains some seemingly mystical happenings in high-performance computing. It is pointed out that, with the present technology and paradigm, only marginal improvement in performance is possible, and that the major obstacle to higher-performance applications is the 70-year-old computing paradigm itself. A way to step forward is also suggested.

14.08.2018
21:09 Zdnet.com HPE to build supercomputer for federal renewable energy research

The supercomputer, dubbed "Eagle," was designed for the US government's only lab dedicated completely to energy efficiency and renewable energy.

17:08 Phys.org Deep learning stretches up to scientific supercomputers

Machine learning, a form of artificial intelligence, enjoys unprecedented success in commercial applications. However, the use of machine learning in high performance computing for science has been limited. Why? Advanced machine learning tools weren't designed for big data sets, like those used to study stars and planets. A team from Intel, National Energy Research Scientific Computing Center (NERSC), and Stanford changed that situation. They developed the first 15-petaflop deep-learning software. They demonstrated its ability to handle large data sets via test runs on the Cori supercomputer.

04:11 Arxiv.org CS Interactive Launch of 16,000 Microsoft Windows Instances on a Supercomputer. (arXiv:1808.04345v1 [cs.DC])

Simulation, machine learning, and data analysis require a wide range of software which can be dependent upon specific operating systems, such as Microsoft Windows. Running this software interactively on massively parallel supercomputers can present many challenges. Traditional methods of scaling Microsoft Windows applications to run on thousands of processors have typically relied on heavyweight virtual machines that can be inefficient and slow to launch on modern manycore processors. This paper describes a unique approach using the Lincoln Laboratory LLMapReduce technology in combination with the Wine Windows compatibility layer to rapidly and simultaneously launch and run Microsoft Windows applications on thousands of cores on a supercomputer. Specifically, this work demonstrates launching 16,000 Microsoft Windows applications in 5 minutes running on 16,000 processor cores. This capability

11.08.2018
15:19 Technology.org Supercomputer simulations show new target in HIV-1 replication

XSEDE Anton2, Stampede2 systems model Inositol phosphate interactions with HIV-1 structural proteins. HIV-1 replicates in ninja-like ways. The

10.08.2018
16:09 News-Medical.Net Supercomputer simulations reveal potential therapeutic target in HIV-1 replication

HIV-1 replicates in ninja-like ways. The virus slips through the membrane of vital white blood cells. Inside, HIV-1 copies its genes and scavenges parts to build a protective bubble for its copies.

09.08.2018
22:16 ScienceDaily.com Supercomputer simulations show new target in HIV-1 replication

A new study has found that the naturally occurring compound inositol hexakisphosphate (IP6) promotes both assembly and maturation of HIV-1. NSF-XSEDE allocations on the Stampede2 supercomputing system at the Texas Advanced Computing Center and on Anton2 at the Pittsburgh Supercomputing Center simulated atomistic interactions of the IP6 molecule with HIV structural proteins. The research opens the door to the development of new treatments for HIV-1.

26.07.2018
19:12 WhatReallyHappened.com Building a global AI supercomputer – The 2018 Microsoft Research Faculty Summit

We live in the age of the intelligent edge and intelligent cloud. Data is collected at the edge using billions of small devices. The data is pre-processed at the edge and shipped in a filtered and aggregated form to the cloud. In the cloud, the data is analyzed and used to train models which are in turn deployed at both the edge and in the cloud to make decisions. This way, computing and AI are infused into virtually all processes of our daily life. The result is a supercomputer at global scale composed of billions of devices with micro-controllers at the edge and millions of servers in the cloud and at the edge.

10:06 Phys.org Newest supercomputer to help develop fusion energy in international device

Scientists led by Stephen Jardin, principal research physicist and head of the Computational Plasma Physics Group at the U.S. Department of Energy's (DOE) Princeton Plasma Physics Laboratory (PPPL), have won 40 million core hours of supercomputer time to simulate plasma disruptions that can halt fusion reactions and damage fusion facilities, so that scientists can learn how to stop them. The PPPL team will apply its findings to ITER, the international tokamak under construction in France to demonstrate the practicality of fusion energy. The results could help ITER operators mitigate the large-scale disruptions the facility inevitably will face.

24.07.2018
19:57 WhatReallyHappened.com AI everywhere! New method allows full AI on basic laptops! supercomputers could QUADRUPLE neural networks!

19.07.2018
14:49 Phys.org World-first program to stop hacking by supercomputers

IT experts at Monash University have devised the world's leading post-quantum secure privacy-preserving algorithm – so powerful it can thwart attacks from supercomputers of the future.

13.07.2018
16:20 Phys.org How to fit a planet inside a computer—developing the energy exascale earth system model

The Earth was apparently losing water.

10:15 Nanowerk.com How to fit a planet inside a computer: developing the Energy Exascale Earth system model

Researchers have developed a new simulation to help us learn more about Earth's present and future.

06:55 Arxiv.org CS Virtualizing the Stampede2 Supercomputer with Applications to HPC in the Cloud. (arXiv:1807.04616v1 [cs.DC])

Methods developed at the Texas Advanced Computing Center (TACC) are described and demonstrated for automating the construction of an elastic, virtual cluster emulating the Stampede2 high performance computing (HPC) system. The cluster can be built and/or scaled in a matter of minutes on the Jetstream self-service cloud system and shares many properties of the original Stampede2, including: i) common identity management, ii) access to the same file systems, iii) equivalent software application stack and module system, iv) similar job scheduling interface via Slurm.
We measure time-to-solution for a number of common scientific applications on our virtual cluster against equivalent runs on Stampede2 and develop an application profile where performance is similar or otherwise acceptable. For such applications, the virtual cluster provides an effective form of "cloud bursting" with the

11.07.2018
13:41 Technology.org Princeton Research Computing introduces newest TIGER supercomputer

At close to six times the power of its predecessor, TIGER is funded by the University provost, the Princeton

05:18 Arxiv.org CS The SAGE Project: a Storage Centric Approach for Exascale Computing. (arXiv:1807.03632v1 [cs.DC])

SAGE (Percipient StorAGe for Exascale Data Centric Computing) is a European Commission funded project towards the era of Exascale computing. Its goal is to design and implement a Big Data/Extreme Computing (BDEC) capable infrastructure with associated software stack. The SAGE system follows a "storage centric" approach as it is capable of storing and processing large data volumes at the Exascale regime.
SAGE addresses the convergence of Big Data Analysis and HPC in an era of next-generation data centric computing. This convergence is driven by the proliferation of massive data sources, such as large, dispersed scientific instruments and sensors where data needs to be processed, analyzed and integrated into simulations to derive scientific and innovative insights. A first prototype of the SAGE system has been been implemented and installed at the Julich Supercomputing Center. The SAGE

09.07.2018
20:44 Zdnet.com HPE supercomputer used to advance modeling of the mammalian brain

A Swiss research initiative called the Blue Brain Project is using a supercomputer based on the HPE SGI 8600 System to model regions of the mouse brain.

02.07.2018
09:27 Technology.org Supercomputers Help Design Mutant Enzyme that Eats Plastic Bottles

A dump truck’s worth of plastic empties into the ocean every minute. Worldwide, humankind produces over 300 million

29.06.2018
19:46 ExtremeTech.com Japan Tests Silicon for Exascale Computing in 2021

Fujitsu and Riken are prepping a new leap ahead for exascale computing in Japan -- and the system should be ready by 2021.

28.06.2018
11:09 Phys.org Supercomputers help design mutant enzyme that eats plastic bottles

A dump truck's worth of plastic empties into the ocean every minute. Worldwide, humankind produces over 300 million tons of plastic each year, much of which is predicted to last centuries to millennia and pollutes both aquatic and terrestrial environments. PET plastic, short for polyethylene terephthalate, is the fourth most-produced plastic and is used to make things like beverage bottles and carpets, the latter essentially not being recycled. Some scientists are hoping to change that, using supercomputers to engineer an enzyme that breaks down PET. They say it's a step on a long road toward recycling PET and other plastics into commercially valuable materials at industrial scale.
