Techh.info/tech: technology hourly

Supercomputers

14.11.2018
13:23 Euronews.Net New weather supercomputer to be installed in Bologna

A next-generation supercomputer is set to be installed in Bologna, Italy. The new system could help predict the weather with more accuracy, giving people a better chance of preparing for high-impact events such as windstorms or floods.

13.11.2018
10:35 Arxiv.org CS Scalability Evaluation of Iterative Algorithms Used for Supercomputer Simulation of Physical Processes. (arXiv:1811.04276v1 [cs.DC])

The paper is devoted to the development of a methodology for evaluating the scalability of compute-intensive iterative algorithms used in simulating complex physical processes on supercomputer systems. The proposed methodology is based on the BSF (Bulk Synchronous Farm) parallel computation model, which makes it possible to predict the upper scalability bound of an iterative algorithm in the early phases of its design. The BSF model assumes representation of the algorithm in the form of operations on lists using higher-order functions. Two classes of representations are considered: BSF-M (Map BSF) and BSF-MR (Map-Reduce BSF). The proposed methodology is described by the example of solving a system of linear equations by the Jacobi method. For the Jacobi method, two iterative algorithms are constructed: Jacobi-M, based on the BSF-M representation, and Jacobi-MR, based on the BSF-MR representation.
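
To make the list-of-operations view concrete, here is a minimal sketch (illustrative names, not the paper's code) of a Jacobi sweep written as a map over row indices, with the global convergence check playing the role of the reduce step:

```python
# Minimal Jacobi solver sketch: each row update is an independent "map" task,
# matching the BSF view of an iteration as operations on lists.
import numpy as np

def jacobi_step(A, b, x):
    """One Jacobi iteration: x_i <- (b_i - sum_{j != i} A_ij * x_j) / A_ii."""
    n = len(b)
    def update_row(i):
        s = sum(A[i][j] * x[j] for j in range(n) if j != i)
        return (b[i] - s) / A[i][i]
    return np.array([update_row(i) for i in range(n)])  # the "map" phase

def jacobi(A, b, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    for _ in range(max_iter):
        x_new = jacobi_step(A, b, x)
        if np.linalg.norm(x_new - x) < tol:  # global check: the "reduce" side
            return x_new
        x = x_new
    return x

A = np.array([[4.0, 1.0], [2.0, 5.0]])  # diagonally dominant, so Jacobi converges
b = np.array([1.0, 2.0])
print(jacobi(A, b))  # approx. [0.1667, 0.3333]
```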

08:32 Technology.org Sierra reaches higher altitudes, takes No. 2 spot on list of world’s fastest supercomputers

Sierra, Lawrence Livermore National Laboratory’s (LLNL) newest supercomputer, rose to second place on the list of the world’s

12.11.2018
20:53 ScienceNewsDaily.org US overtakes China in top supercomputer list

A new list of the world's most powerful machines puts the US in the top two spots.

19:55 Zdnet.com US now claims world's top two fastest supercomputers

According to the Top500 List, IBM-built supercomputers Summit and Sierra have dethroned China's Sunway TaihuLight in terms of performance power.

07.11.2018
06:07 Arxiv.org Statistics Mesh-TensorFlow: Deep Learning for Supercomputers. (arXiv:1811.02084v1 [cs.LG])

Batch-splitting (data-parallelism) is the dominant distributed Deep Neural Network (DNN) training strategy, due to its universal applicability and its amenability to Single-Program-Multiple-Data (SPMD) programming. However, batch-splitting suffers from problems including the inability to train very large models (due to memory constraints), high latency, and inefficiency at small batch sizes. All of these can be solved by more general distribution strategies (model-parallelism). Unfortunately, efficient model-parallel algorithms tend to be complicated to discover, describe, and implement, particularly on large clusters. We introduce Mesh-TensorFlow, a language for specifying a general class of distributed tensor computations. Where data-parallelism can be viewed as splitting tensors and operations along the "batch" dimension, in Mesh-TensorFlow, the user can specify any tensor-dimensions to be split across any dimensions of a multi-dimensional mesh of processors.
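
As a rough illustration of the distinction the abstract draws (plain NumPy, not the Mesh-TensorFlow API): data parallelism shards activations along the batch dimension and replicates the weights, while model parallelism shards a weight dimension instead; both reassemble to the same result.

```python
# Schematic comparison of batch-splitting vs. splitting a model dimension.
import numpy as np

batch, d_in, d_out, n_devices = 8, 4, 6, 2
x = np.random.randn(batch, d_in)   # activations
w = np.random.randn(d_in, d_out)   # layer weights

# Data parallelism: split the "batch" dimension of x; w is replicated.
x_shards = np.split(x, n_devices, axis=0)
data_parallel = [shard @ w for shard in x_shards]      # each device holds (4, 6)

# Model parallelism: split the d_out dimension of w; x is replicated.
w_shards = np.split(w, n_devices, axis=1)
model_parallel = [x @ shard for shard in w_shards]     # each device holds (8, 3)

# Both strategies recover the full product after concatenation.
assert np.allclose(np.concatenate(data_parallel, axis=0), x @ w)
assert np.allclose(np.concatenate(model_parallel, axis=1), x @ w)
```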

05:56 Arxiv.org CS Defining Big Data Analytics Benchmarks for Next Generation Supercomputers. (arXiv:1811.02287v1 [cs.PF])

The design and construction of high performance computing (HPC) systems relies on exhaustive performance analysis and benchmarking. Traditionally this activity has been geared exclusively towards simulation scientists, who, unsurprisingly, have been the primary customers of HPC for decades. However, there is a large and growing volume of data science work that requires these large scale resources, and as such the calls for inclusion and investments in data for HPC have been increasing. So when designing a next generation HPC platform, it is necessary to have HPC-amenable big data analytics benchmarks. In this paper, we propose a set of big data analytics benchmarks and sample codes designed for testing the capabilities of current and next generation supercomputers.

05:56 Arxiv.org CS Mesh-TensorFlow: Deep Learning for Supercomputers. (arXiv:1811.02084v1 [cs.LG])

Batch-splitting (data-parallelism) is the dominant distributed Deep Neural Network (DNN) training strategy, due to its universal applicability and its amenability to Single-Program-Multiple-Data (SPMD) programming. However, batch-splitting suffers from problems including the inability to train very large models (due to memory constraints), high latency, and inefficiency at small batch sizes. All of these can be solved by more general distribution strategies (model-parallelism). Unfortunately, efficient model-parallel algorithms tend to be complicated to discover, describe, and implement, particularly on large clusters. We introduce Mesh-TensorFlow, a language for specifying a general class of distributed tensor computations. Where data-parallelism can be viewed as splitting tensors and operations along the "batch" dimension, in Mesh-TensorFlow, the user can specify any tensor-dimensions to be split across any dimensions of a multi-dimensional mesh of processors.

06.11.2018
23:05 Gizmag Million-core neuromorphic supercomputer could simulate an entire mouse brain


After 12 years of work, researchers at the University of Manchester in England have completed construction of a "SpiNNaker" (Spiking Neural Network Architecture) supercomputer. It can simulate the internal workings of up to a billion neurons through a whopping one million processing units.

05.11.2018
21:37 ScientificAmerican.Com A New Supercomputer Is the World's Fastest Brain-Mimicking Machine

The computer has one million processors and 1,200 interconnected circuit boards.

15:20 LiveScience.com New Supercomputer with 1 Million Processors Is World's Fastest Brain-Mimicking Machine

A supercomputer that "thinks" like a brain can simulate neural activity in real time.

02.11.2018
22:54 ExtremeTech.com NASA Will Use ISS Supercomputer for Science Experiments

It was only there for a test run, but now the agency plans to use it for processing data and running experiments.

18:45 Telegraph.co.uk 'Human brain' supercomputer switched on for the first time


18:24 CNN Health A brain-like supercomputer could help Siri understand your accent

Hey Siri, listen up. A multitasking supercomputer that attempts to mimic the human brain was switched on Friday -- and it could be used to help virtual assistants like Apple's Siri and Amazon's Alexa understand your accent.

09:46 News-Medical.Net World's largest neuromorphic supercomputer being switched on for the first time

The world's largest neuromorphic supercomputer designed and built to work in the same way a human brain does has been fitted with its landmark one-millionth processor core and is being switched on for the first time.

31.10.2018
15:07 ExtremeTech.com Nvidia Tesla, AMD Epyc to Power New Berkeley Supercomputer

Nvidia and AMD are the big winners in a new supercomputer announcement that will put Epyc and Tesla silicon in Cray's latest Shasta system.

30.10.2018
21:12 Zdnet.com US Energy Dept. announces new Nvidia-powered supercomputer

The Perlmutter will more than triple the computational power currently available at the National Energy Research Scientific Computing (NERSC) Center.

07:02 Arxiv.org CS FFT, FMM, and Multigrid on the Road to Exascale: performance challenges and opportunities. (arXiv:1810.11883v1 [cs.DC])

FFT, FMM, and multigrid methods are widely used fast and highly scalable solvers for elliptic PDEs. However, emerging large-scale computing systems are introducing challenges in comparison to current petascale computers. Recent efforts have identified several constraints in the design of exascale software that include massive concurrency, resilience management, exploiting the high performance of heterogeneous systems, energy efficiency, and utilizing the deeper and more complex memory hierarchy expected at exascale. In this paper, we perform a model-based comparison of the FFT, FMM, and multigrid methods in the context of these projected constraints. In addition we use performance models to offer predictions about the expected performance on upcoming exascale system configurations based on current technology trends.
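
For orientation, the textbook serial complexities of these three solver families on a problem with N unknowns are (standard results, not figures from the paper):

```latex
\text{FFT-based Poisson solve: } O(N \log N), \qquad
\text{FMM: } O(N), \qquad
\text{multigrid V-cycle: } O(N)
```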

29.10.2018
09:02 Technology.org Lawrence Livermore unveils NNSA’s Sierra, world’s third fastest supercomputer

The Department of Energy’s National Nuclear Security Administration (NNSA), Lawrence Livermore National Laboratory (LLNL) and its industry partners

24.10.2018
21:44 ScienceMag.org Three Chinese teams join race to build the world’s fastest supercomputer

Exascale computers promise dramatic advances in climate modeling, genetics studies, and artificial intelligence

23.10.2018
10:25 NewScientist.Com Tiny supercomputers could be made from the skeleton inside your cells

Building a computer out of the skeletons that hold our cells together could make them smaller and far more energy efficient

17.10.2018
15:36 ScienceDaily.com Supermassive black holes and supercomputers

The universe's deep past is beyond the reach of even the mighty Hubble Space Telescope. But a new review explains how creation of the first stars and galaxies is nevertheless being mapped in detail, with the aid of computer simulations and theoretical models -- and how a new generation of supercomputers and software is being built that will fill in the gaps.

09:35 Nanowerk.com Supermassive black holes and supercomputers

Researchers reveal the story of the oldest stars and galaxies, compiled from 20 years of simulating the early universe.

15.10.2018
16:35 Technology.org Supercomputer predicts optical properties of complex hybrid materials

Materials scientists at Duke University computationally predicted the electrical and optical properties of semiconductors made from extended organic

08.10.2018
20:50 Phys.org Supercomputer predicts optical and thermal properties of complex hybrid materials

Materials scientists at Duke University computationally predicted the electrical and optical properties of semiconductors made from extended organic molecules sandwiched by inorganic structures.

05.10.2018
10:37 Arxiv.org CS Exascale Deep Learning for Climate Analytics. (arXiv:1810.01993v1 [cs.DC])

We extract pixel-level masks of extreme weather patterns using variants of Tiramisu and DeepLabv3+ neural networks. We describe improvements to the software frameworks, input pipeline, and the network training algorithms necessary to efficiently scale deep learning on the Piz Daint and Summit systems. The Tiramisu network scales to 5300 P100 GPUs with a sustained throughput of 21.0 PF/s and parallel efficiency of 79.0%. DeepLabv3+ scales up to 27360 V100 GPUs with a sustained throughput of 325.8 PF/s and a parallel efficiency of 90.7% in single precision. By taking advantage of the FP16 Tensor Cores, a half-precision version of the DeepLabv3+ network achieves a peak and sustained throughput of 1.13 EF/s and 999.0 PF/s respectively.
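
As a consistency check on the quoted figures (assuming parallel efficiency is defined against a single-GPU sustained baseline s_1, which the snippet does not state):

```latex
E = \frac{S_N}{N \, s_1}
\quad\Longrightarrow\quad
s_1 \approx \frac{325.8\ \mathrm{PF/s}}{0.907 \times 27360} \approx 13.1\ \mathrm{TF/s\ per\ V100}
```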

04.10.2018
06:14 Arxiv.org Physics Simulating the weak death of the neutron in a femtoscale universe with near-Exascale computing. (arXiv:1810.01609v1 [hep-lat])

The fundamental particle theory called Quantum Chromodynamics (QCD) dictates everything about protons and neutrons, from their intrinsic properties to interactions that bind them into atomic nuclei. Quantities that cannot be fully resolved through experiment, such as the neutron lifetime (whose precise value is important for the existence of light-atomic elements that make the sun shine and life possible), may be understood through numerical solutions to QCD. We directly solve QCD using Lattice Gauge Theory and calculate nuclear observables such as neutron lifetime. We have developed an improved algorithm that exponentially decreases the time-to-solution and applied it on the new CORAL supercomputers, Sierra and Summit. We use run-time autotuning to distribute GPU resources, achieving 20% performance at low node count. We also developed optimal application mapping through a job manager, which

06:14 Arxiv.org CS Simulating the weak death of the neutron in a femtoscale universe with near-Exascale computing. (arXiv:1810.01609v1 [hep-lat])

The fundamental particle theory called Quantum Chromodynamics (QCD) dictates everything about protons and neutrons, from their intrinsic properties to interactions that bind them into atomic nuclei. Quantities that cannot be fully resolved through experiment, such as the neutron lifetime (whose precise value is important for the existence of light-atomic elements that make the sun shine and life possible), may be understood through numerical solutions to QCD. We directly solve QCD using Lattice Gauge Theory and calculate nuclear observables such as neutron lifetime. We have developed an improved algorithm that exponentially decreases the time-to-solution and applied it on the new CORAL supercomputers, Sierra and Summit. We use run-time autotuning to distribute GPU resources, achieving 20% performance at low node count. We also developed optimal application mapping through a job manager, which

01.10.2018
23:28 Nanowerk.com Coming soon to exascale computing: Software for chemistry of catalysis

A new 4-year project aims to develop software that will bring the power of exascale computers to the computational study and design of catalytic materials.

27.09.2018
04:10 Arxiv.org CS Programming at Exascale: Challenges and Innovations. (arXiv:1809.10023v1 [cs.DC])

Supercomputers become faster as hardware and software technologies continue to evolve. Current supercomputers are capable of 10^15 floating point operations per second (FLOPS), a level called petascale. The high performance computing (HPC) community is looking forward to systems with a capability of 10^18 FLOPS, called exascale. Having a system a thousand times faster than the previous one poses challenges to the HPC community. These challenges require innovation in software and hardware. In this paper, the challenges posed by programming at exascale are reviewed and developments in the main programming models and systems are surveyed.

26.09.2018
13:35 Zdnet.com Europe's greenest supercomputer: Why energy-efficient HPC is on the rise

MareNostrum 4 Power9 is Europe's greenest supercomputer but it has nothing to do with being situated in a 19th-century church.

24.09.2018
05:11 Arxiv.org Physics Towards a Mini-App for Smoothed Particle Hydrodynamics at Exascale. (arXiv:1809.08013v1 [physics.comp-ph])

The smoothed particle hydrodynamics (SPH) technique is a purely Lagrangian method, used in numerical simulations of fluids in astrophysics and computational fluid dynamics, among many other fields. SPH simulations with detailed physics represent computationally-demanding calculations. The parallelization of SPH codes is not trivial due to the absence of a structured grid. Additionally, the performance of the SPH codes can be, in general, adversely impacted by several factors, such as multiple time-stepping, long-range interactions, and/or boundary conditions. This work presents insights into the current performance and functionalities of three SPH codes: SPHYNX, ChaNGa, and SPH-flow. These codes are the starting point of an interdisciplinary co-design project, SPH-EXA, for the development of an Exascale-ready SPH mini-app. To gain such insights, a rotating square patch test was implemented as
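
For reference, the grid-free character of SPH comes from its kernel-weighted particle sums; the standard density estimate (general SPH formulation, not specific to these three codes) is:

```latex
\rho_i = \sum_j m_j \, W\!\left( \lVert \mathbf{r}_i - \mathbf{r}_j \rVert,\, h \right)
```

where m_j are particle masses, W is the smoothing kernel, and h the smoothing length; because neighbours must be found among unordered particles rather than read off a grid, the parallel decomposition is nontrivial.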

05:11 Arxiv.org CS Towards a Mini-App for Smoothed Particle Hydrodynamics at Exascale. (arXiv:1809.08013v1 [physics.comp-ph])

The smoothed particle hydrodynamics (SPH) technique is a purely Lagrangian method, used in numerical simulations of fluids in astrophysics and computational fluid dynamics, among many other fields. SPH simulations with detailed physics represent computationally-demanding calculations. The parallelization of SPH codes is not trivial due to the absence of a structured grid. Additionally, the performance of the SPH codes can be, in general, adversely impacted by several factors, such as multiple time-stepping, long-range interactions, and/or boundary conditions. This work presents insights into the current performance and functionalities of three SPH codes: SPHYNX, ChaNGa, and SPH-flow. These codes are the starting point of an interdisciplinary co-design project, SPH-EXA, for the development of an Exascale-ready SPH mini-app. To gain such insights, a rotating square patch test was implemented as

21.09.2018
15:36 Phys.org Accelerated architecture of America's fastest supercomputer boosts QCD simulations

In pursuit of numerical predictions for exotic particles, researchers are simulating atom-building quark and gluon particles over 70 times faster on Summit, the world's most powerful scientific supercomputer, than on its predecessor Titan at the US Department of Energy's (DOE's) Oak Ridge National Laboratory (ORNL). The interactions of quarks and gluons are computed using lattice quantum chromodynamics (QCD)—a computer-friendly version of the mathematical framework that describes these strong-force interactions.

10.09.2018
15:06 ExtremeTech.com Japan’s New Supercomputer Is the Fastest Ever for Astronomy Research

To understand how the universe works, scientists often need to turn to computer simulations. Those simulations are getting more powerful thanks to a new Japanese supercomputer called ATERUI II.

07.09.2018
09:29 Arxiv.org Physics Glimpses of Space-Time Beyond the Singularities Using Supercomputers. (arXiv:1809.01747v1 [physics.comp-ph])

A fundamental problem of Einstein's theory of classical general relativity is the existence of singularities such as the big bang. All known laws of physics end at these boundaries of classical space-time. Thanks to recent developments in quantum gravity, supercomputers are now playing an important role in understanding the resolution of big bang and black hole singularities. Using supercomputers, explorations of the very genesis of space and time from quantum geometry are revealing a novel picture of what lies beyond classical singularities and the new physics of the birth of our universe.

03.09.2018
05:57 Arxiv.org Physics Towards Exascale Simulations of the ICM Dynamo with Wombat. (arXiv:1808.10633v1 [physics.comp-ph])

In galaxy clusters, modern radio interferometers observe non-thermal radio sources with unprecedented spatial and spectral resolution. For the first time, the new data allow researchers to infer the structure of the intra-cluster magnetic fields on small scales via Faraday tomography. This leap forward demands new numerical models for the amplification of magnetic fields in cosmic structure formation - the cosmological magnetic dynamo. Here we present a novel numerical approach to astrophysical MHD simulations aimed at resolving this small-scale dynamo in future cosmological simulations. As a first step, we implement a fifth order WENO scheme in the new code WOMBAT. We show that this scheme doubles the effective resolution of the simulation and is thus less expensive than common second order schemes. WOMBAT uses a novel approach to parallelization and load balancing developed in collaboration with

31.08.2018
00:48 Popular Science University supercomputers are science's unsung heroes, and Texas will get the fastest yet

The machine is called Frontera.
Frontera will be the fastest supercomputer at a university.

29.08.2018
16:55 Phys.org New Texas supercomputer to push the frontiers of science

The National Science Foundation (NSF) announced today that it has awarded $60 million to the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for the acquisition and deployment of a new supercomputer that will be the fastest at any U.S. university and among the most powerful in the world.

28.08.2018
17:15 Zdnet.com Garvan Institute gets new supercomputer for genomic research

Dell EMC is providing the Garvan Institute of Medical Research with a new HPC system to support genomic research and analysis.

27.08.2018
19:39 Phys.org Artificial intelligence project to help bring the power of the sun to Earth is picked for first U.S. exascale system

To capture and control the process of fusion that powers the sun and stars in facilities on Earth called tokamaks, scientists must confront disruptions that can halt the reactions and damage the doughnut-shaped devices. Now an artificial intelligence system under development at the U.S. Department of Energy's (DOE) Princeton Plasma Physics Laboratory (PPPL) and Princeton University to predict and tame such disruptions has been selected as an Aurora Early Science project by the Argonne Leadership Computing Facility, a DOE Office of Science User Facility.

17.08.2018
09:57 Arxiv.org CS Limitations of performance of Exascale Applications and supercomputers they are running on. (arXiv:1808.05338v1 [cs.DC])

The paper highlights that the cooperation of the components of computing systems receives even more focus in the coming age of exascale computing. It shows that inherent performance limitations exist and identifies the major critical contributors to performance on systems with very many processors. The extended and reinterpreted simple Amdahl model describes the behavior of existing supercomputers surprisingly well, and explains some mystical happenings around high-performance computing. It is pointed out that using the present technology and paradigm only marginal improvement of performance is possible, and that the major obstacle to higher-performance applications is the 70-year-old computing paradigm itself. A way to step forward is also suggested.
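
The "simple Amdahl model" the abstract extends is the classical speedup bound: if a fraction p of the work parallelizes perfectly over N processors,

```latex
S(N) = \frac{1}{(1 - p) + p/N},
\qquad
\lim_{N \to \infty} S(N) = \frac{1}{1 - p}
```

so the sequential fraction (1 - p), however small, caps the achievable speedup, which is the kind of inherent limitation the paper builds on.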

14.08.2018
21:09 Zdnet.com HPE to build supercomputer for federal renewable energy research

The supercomputer, dubbed "Eagle," was designed for the US government's only lab dedicated completely to energy efficiency and renewable energy.

17:08 Phys.org Deep learning stretches up to scientific supercomputers

Machine learning, a form of artificial intelligence, enjoys unprecedented success in commercial applications. However, the use of machine learning in high performance computing for science has been limited. Why? Advanced machine learning tools weren't designed for big data sets, like those used to study stars and planets. A team from Intel, National Energy Research Scientific Computing Center (NERSC), and Stanford changed that situation. They developed the first 15-petaflop deep-learning software. They demonstrated its ability to handle large data sets via test runs on the Cori supercomputer.

04:11 Arxiv.org CS Interactive Launch of 16,000 Microsoft Windows Instances on a Supercomputer. (arXiv:1808.04345v1 [cs.DC])

Simulation, machine learning, and data analysis require a wide range of software which can be dependent upon specific operating systems, such as Microsoft Windows. Running this software interactively on massively parallel supercomputers can present many challenges. Traditional methods of scaling Microsoft Windows applications to run on thousands of processors have typically relied on heavyweight virtual machines that can be inefficient and slow to launch on modern manycore processors. This paper describes a unique approach using the Lincoln Laboratory LLMapReduce technology in combination with the Wine Windows compatibility layer to rapidly and simultaneously launch and run Microsoft Windows applications on thousands of cores on a supercomputer. Specifically, this work demonstrates launching 16,000 Microsoft Windows applications in 5 minutes running on 16,000 processor cores. This capability
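
The LLMapReduce tooling itself is not shown in the abstract; as a toy sketch of the underlying pattern only (hypothetical binary name, local cores rather than supercomputer nodes, no scheduler integration), launching many Wine-wrapped Windows instances from Python might look like:

```python
# Toy sketch: parallel launch of a Windows binary under the Wine compatibility
# layer. The real system drives such launches across thousands of nodes.
import os
import subprocess
from concurrent.futures import ThreadPoolExecutor

APP = "app.exe"      # hypothetical Windows application
N_INSTANCES = 8      # one per local core in this toy example

def launch(i):
    # A separate Wine prefix per instance keeps their state from colliding.
    env = {**os.environ, "WINEPREFIX": f"/tmp/wine-{i}"}
    return subprocess.run(["wine", APP, str(i)], env=env).returncode

with ThreadPoolExecutor(max_workers=N_INSTANCES) as pool:
    print(list(pool.map(launch, range(N_INSTANCES))))
```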

11.08.2018
15:19 Technology.org Supercomputer simulations show new target in HIV-1 replication

XSEDE Anton2, Stampede2 systems model Inositol phosphate interactions with HIV-1 structural proteins. HIV-1 replicates in ninja-like ways. The

10.08.2018
16:09 News-Medical.Net Supercomputer simulations reveal potential therapeutic target in HIV-1 replication

HIV-1 replicates in ninja-like ways. The virus slips through the membrane of vital white blood cells. Inside, HIV-1 copies its genes and scavenges parts to build a protective bubble for its copies.

09.08.2018
22:16 ScienceDaily.com Supercomputer simulations show new target in HIV-1 replication

A new study has found that the naturally occurring compound inositol hexakisphosphate (IP6) promotes both assembly and maturation of HIV-1. NSF XSEDE allocations on the Stampede2 supercomputing system at the Texas Advanced Computing Center and on Anton2 at the Pittsburgh Supercomputing Center simulated atomistic interactions of the IP6 molecule with HIV structural proteins. The research opens the door to the development of new treatments for the HIV-1 virus.

26.07.2018
19:12 WhatReallyHappened.com Building a global AI supercomputer – The 2018 Microsoft Research Faculty Summit

We live in the age of the intelligent edge and intelligent cloud. Data is collected at the edge using billions of small devices. The data is pre-processed at the edge and shipped in a filtered and aggregated form to the cloud. In the cloud, the data is analyzed and used to train models which are in turn deployed at both the edge and in the cloud to make decisions. This way, computing and AI are infused into virtually all processes of our daily life. The result is a supercomputer at global scale composed of billions of devices with micro-controllers at the edge and millions of servers in the cloud and at the edge.

10:06 Phys.org Newest supercomputer to help develop fusion energy in international device

Scientists led by Stephen Jardin, principal research physicist and head of the Computational Plasma Physics Group at the U.S. Department of Energy's (DOE) Princeton Plasma Physics Laboratory (PPPL), have won 40 million core hours of supercomputer time to simulate plasma disruptions that can halt fusion reactions and damage fusion facilities, so that scientists can learn how to stop them. The PPPL team will apply its findings to ITER, the international tokamak under construction in France to demonstrate the practicality of fusion energy. The results could help ITER operators mitigate the large-scale disruptions the facility inevitably will face.

24.07.2018
19:57 WhatReallyHappened.com AI everywhere! New method allows full AI on basic laptops! Supercomputers could QUADRUPLE neural networks!

19.07.2018
14:49 Phys.org World-first program to stop hacking by supercomputers

IT experts at Monash University have devised the world's leading post-quantum secure privacy-preserving algorithm – so powerful it can thwart attacks from supercomputers of the future.

13.07.2018
16:20 Phys.org How to fit a planet inside a computer: developing the Energy Exascale Earth system model

The Earth was apparently losing water.

10:15 Nanowerk.com How to fit a planet inside a computer: developing the Energy Exascale Earth system model

Researchers have developed a new simulation to help us learn more about Earth's present and future.

06:55 Arxiv.org CS Virtualizing the Stampede2 Supercomputer with Applications to HPC in the Cloud. (arXiv:1807.04616v1 [cs.DC])

Methods developed at the Texas Advanced Computing Center (TACC) are described and demonstrated for automating the construction of an elastic, virtual cluster emulating the Stampede2 high performance computing (HPC) system. The cluster can be built and/or scaled in a matter of minutes on the Jetstream self-service cloud system and shares many properties of the original Stampede2, including: i) common identity management, ii) access to the same file systems, iii) equivalent software application stack and module system, iv) similar job scheduling interface via Slurm.
We measure time-to-solution for a number of common scientific applications on our virtual cluster against equivalent runs on Stampede2 and develop an application profile where performance is similar or otherwise acceptable. For such applications, the virtual cluster provides an effective form of "cloud bursting" with the

11.07.2018
13:41 Technology.org Princeton Research Computing introduces newest TIGER supercomputer

At close to six times the power of its predecessor, TIGER is funded by the University provost, the Princeton

05:18 Arxiv.org CS The SAGE Project: a Storage Centric Approach for Exascale Computing. (arXiv:1807.03632v1 [cs.DC])

SAGE (Percipient StorAGe for Exascale Data Centric Computing) is a European Commission funded project towards the era of Exascale computing. Its goal is to design and implement a Big Data/Extreme Computing (BDEC) capable infrastructure with associated software stack. The SAGE system follows a "storage centric" approach as it is capable of storing and processing large data volumes at the Exascale regime.
SAGE addresses the convergence of Big Data Analysis and HPC in an era of next-generation data centric computing. This convergence is driven by the proliferation of massive data sources, such as large, dispersed scientific instruments and sensors where data needs to be processed, analyzed and integrated into simulations to derive scientific and innovative insights. A first prototype of the SAGE system has been implemented and installed at the Julich Supercomputing Center. The SAGE

09.07.2018
20:44 Zdnet.com HPE supercomputer used to advance modeling of the mammalian brain

A Swiss research initiative called the Blue Brain Project is using a supercomputer based on the HPE SGI 8600 System to model regions of the mouse brain.

02.07.2018
09:27 Technology.org Supercomputers Help Design Mutant Enzyme that Eats Plastic Bottles

A dump truck’s worth of plastic empties into the ocean every minute. Worldwide, humankind produces over 300 million

29.06.2018
19:46 ExtremeTech.com Japan Tests Silicon for Exascale Computing in 2021

Fujitsu and Riken are prepping a new leap ahead for exascale computing in Japan -- and the system should be ready by 2021.

28.06.2018
11:09 Phys.org Supercomputers help design mutant enzyme that eats plastic bottles

A dump truck's worth of plastic empties into the ocean every minute. Worldwide, humankind produces over 300 million tons of plastic each year, much of which is predicted to last centuries to millennia and pollutes both aquatic and terrestrial environments. PET plastic, short for polyethylene terephthalate, is the fourth most-produced plastic and is used to make things like beverage bottles and carpets, the latter essentially not being recycled. Some scientists are hoping to change that, using supercomputers to engineer an enzyme that breaks down PET. They say it's a step on a long road toward recycling PET and other plastics into commercially valuable materials at industrial scale.

05:18 New York Times China Extends Lead as Most Prolific Supercomputer Maker

China’s corporations and its government made 206 of the world’s 500 fastest machines, moving further ahead of the number made in the United States.

05:17 International Herald Tribune China Extends Lead as Most Prolific Supercomputer Maker

China’s corporations and its government made 206 of the world’s 500 fastest machines, moving further ahead of the number made in the United States.

27.06.2018
16:24 Technology.org Supercomputers help design mutant enzyme that eats plastic bottles

A dump truck’s worth of plastic empties into the ocean every minute. Worldwide, humankind produces over 300 million

16:23 Phys.org Scientist mines supercomputer simulations of protein dynamics for biological energy-conversion principles

An energy crisis can trigger years of fuel shortages and high gas prices. Energy shortages in biological cells are even more serious. Consequences include amyotrophic lateral sclerosis and aging-related disorders such as Alzheimer's and Parkinson's diseases.

00:47 Zdnet.com Supercomputers: All Linux, all the time

The latest TOP500 Supercomputer list is out. What's not surprising is that Linux runs on every last one of the world's fastest supercomputers. What is surprising is that GPUs, not CPUs, now power most of supercomputers' speed.

26.06.2018
13:15 Technology.org LLNL’s Sierra is third fastest supercomputer

Lawrence Livermore National Laboratory’s (LLNL) next-generation supercomputer, Sierra, is the third-fastest computing system in the world, according to

25.06.2018
18:20 TechnologyReview.com China tops the US as the number one supercomputer manufacturer

17:03 Zdnet.com US supercomputer knocks out Chinese champ to reclaim HPC crown

But China has more machines overall in the TOP500 list.

14:36 Phys.org Engineers turn to Argonne's Mira supercomputer to study supersonic turbulence

Aviation's frontier is supersonic. The military is seeking ever-faster aircraft, planes that can fly five times the speed of sound. Fifteen years after the Concorde's last transatlantic flight, Japan Airlines and the Virgin Group are investing in jets that could slash overseas travel time by more than half.

14:36 CNBC technology China extends its lead as the most prolific maker of supercomputers

The new list of the 500 swiftest machines underlines how much faster China is building supercomputers.

14:25 CNBC top news China extends its lead as the most prolific maker of supercomputers

The new list of the 500 swiftest machines underlines how much faster China is building supercomputers.

13:52 ScienceNewsDaily.org With IBM Summit supercomputer, US reclaims top spot from China in high-powered computing - CNET

But China has even more systems on the latest 500-supercomputer list.

10:47 NYT Technology China Extends Lead as Most Prolific Supercomputer Maker

China’s corporations and its government made 206 of the world’s 500 fastest machines, moving further ahead of the number made in the United States.

24.06.2018
21:31 CNBC top news Two women teamed up to build IBM's new supercomputer — and they've been a powerhouse pair for years

In more ways than one, the two women complete each other.

20.06.2018
21:35 CNBC technology The United States just built the world's fastest supercomputer -- here's what that means

The United States now has the world's fastest supercomputer. The machine, called Summit, was built for Oak Ridge National Laboratory in partnership with IBM and NVidia, and is designed for AI applications.

21:35 CNBC top news The United States just built the world's fastest supercomputer -- here's what that means

The United States now has the world's fastest supercomputer. The machine, called Summit, was built for Oak Ridge National Laboratory in partnership with IBM and NVidia, and is designed for AI applications.

19.06.2018
19:29 ExtremeTech.com World’s Largest ARM-Based Supercomputer Launched, as Exascale Heats Up

HPE is launching the largest supercomputer ever built using ARM CPUs as a testbed for nuclear physics processing at Sandia National Laboratories.

18.06.2018
14:52 ScienceNewsDaily.org World's largest ARM supercomputer is headed to a nuclear security lab

Most supercomputers are focused on pure processing speed. Take the DOE's new Summit system, which is now the world's most powerful supercomputer, with 9,000 22-core IBM Power9 ...

14:08 Zdnet.com HPE announces world's largest Arm-based supercomputer

Astra will deliver over 2.3 peak petaflops of performance, which should put it well within the top 100 supercomputers ever built.

15.06.2018
00:38 LiveScience.com This Supercomputer Can Calculate in 1 Second What Would Take You 6 Billion Years

It's the fastest, and smartest, supercomputer.

14.06.2018
18:05 WhatReallyHappened.com Largest, Fastest Supercomputer in the World Made Just For Artificial Intelligence

13.06.2018
19:58 Zdnet.com How Red Hat Linux is helping reclaim the fastest supercomputer title for the US

It's not just the chips in the Department of Energy's record-breaking Summit supercomputer, which is setting new speed records; it's also Red Hat Enterprise Linux.

01:31 Popular Science Meet the new fastest supercomputer in the world

It's more powerful than a million high-end laptops.
If you wanted to put Summit on your desk, you’d need a workstation that’s about the size of two tennis courts.

12.06.2018
18:21 WhatReallyHappened.com Petaflops: US Regains World’s Fastest Supercomputer Title as ‘Summit’ Fires Up

After switching on the 340-ton Summit, the US has clawed back its title to producing the world’s fastest supercomputer, an honorific held by China for the past five years.
Although the brag has come to be acknowledged as merely symbolic, the location of the world's fastest supercomputer has long been a topic of national pride among a certain kind of politically-motivated tech nerd, as well as a jumping-off point for dark predictions of machine hegemony through the use or abuse of artificial intelligence and surveillance-based data gathering.

11:06 Technology.org Supercomputers provide new insight into the life and death of a neutron

Experiments that measure the lifetime of neutrons reveal a perplexing and unresolved discrepancy. While this lifetime has been

10.06.2018
12:05 Technology.org ORNL Launches Summit Supercomputer

The U.S. Department of Energy’s Oak Ridge National Laboratory today unveiled Summit as the world’s most powerful and

09.06.2018
13:45 Gizmag World's most powerful supercomputer handles staggering 200,000 trillion calculations per second


The United States reclaims the title of having the world's most powerful supercomputer as the US Department of Energy's Oak Ridge National Laboratory (ORNL) unveils a machine capable of handling 200,000 trillion calculations per second (200 petaflops). In a ceremony on Friday, June 8, Secretary of Energy Rick Perry introduced Summit, eight times more powerful than ORNL's previous supercomputer, Titan, which came online in 2012 with a capacity of 27 petaflops.

00:03 ExtremeTech.com IBM, Department of Energy Unveil World’s Fastest Supercomputer

The DoE has unveiled Summit, a new supercomputer from IBM housed at the US Department of Energy’s Oak Ridge National Laboratory (ORNL). It's an order of magnitude more powerful than current US supercomputers and fast enough to beat every other system in the world.

08.06.2018
23:31 Phys.org ORNL launches Summit Supercomputer—America's new top supercomputer for science

The U.S. Department of Energy's Oak Ridge National Laboratory today unveiled Summit as the world's most powerful and smartest scientific supercomputer.

23:31 CNBC top news IBM CEO: World's fastest, smartest supercomputer one of our greatest achievements

The Department of Energy partnered with IBM and Nvidia to deliver the world's fastest supercomputer, called Summit.

23:20 CNBC technology IBM CEO: World's fastest, smartest supercomputer one of our greatest achievements

The Department of Energy partnered with IBM and Nvidia to deliver the world's fastest supercomputer, called Summit.

22:38 Zdnet.com US once again boasts the world's fastest supercomputer

The US Department of Energy on Friday unveiled Summit, which can perform 200 quadrillion calculations per second.

19:26 ScienceNewsDaily.org America’s new supercomputer beats China’s fastest machine to take the world’s most powerful title

19:14 New York Times Move Over, China: U.S. Has the World’s Fastest Supercomputer Again

For the past five years, China has had the world’s speediest computer. But as of Friday, Summit, a machine built in the United States, is taking the lead.

19:14 Financial Times IBM builds world’s most powerful supercomputer

Summit machine boasts 200 petaflops and was designed with big data in mind

19:13 International Herald Tribune Move Over, China: U.S. Has the World’s Fastest Supercomputer Again

For the past five years, China has had the world’s speediest computer. But as of Friday, Summit, a machine built in the United States, is taking the lead.

19:05 TechnologyReview.com The world’s most powerful supercomputer is tailor made for the AI era

The technology used to build America’s new Summit machine will also help us make the leap to exascale computing.

19:05 TechnologyReview.com America’s new supercomputer beats China’s fastest machine to take the world’s most powerful title

06.06.2018
18:10 Zdnet.com University of Sydney unveils AU$2.3m Artemis 3 AI research supercomputer

Powered by Dell EMC, the upgrade to Artemis is expected to allow further research into artificial intelligence.
