Techh.info / technology hourly

Supercomputers

18.05.2019
00:21 ScienceNewsDaily.org: HP Enterprise buying supercomputer star Cray

Hewlett Packard Enterprise (HPE) on Friday announced a $1.3 billion deal to buy supercomputer maker Cray, part of a move to expand into data analysis from connected devices.
17.05.2019
23:31 NYT Technology: HP Enterprise to Acquire Supercomputer Pioneer Cray

The price was relatively small, but the deal may have a big impact on the race between the United States and China to build more powerful computers.
22:44 ExtremeTech.com: HP Enterprise Buys Supercomputer Pioneer Cray for $1.3B

HPE has bought Cray, the original supercomputer manufacturer, for $1.3B. The deal is expected to boost HPE's own HPC business segment.
20:43 Phys.org: HP Enterprise buying supercomputer star Cray

Hewlett Packard Enterprise (HPE) on Friday announced a $1.3 billion deal to buy supercomputer maker Cray, part of a move to expand into data analysis from connected devices.
17:39 Zdnet.com: HPE buys supercomputer company Cray for $1.3 billion

At the core of the deal is Cray's high-performance computing (HPC) technology, which HPE wants to offer as a future HPC-as-a-Service platform.
16:49 Zdnet.com: Is HPE about to buy supercomputer maker Cray?

HPE could soon acquire its way to the world of exascale supercomputing.
15:20 Reuters: Hewlett Packard Enterprise to buy supercomputer maker Cray in $1.3 billion deal

Supercomputer manufacturer Cray Inc said on Friday it would be bought by Hewlett Packard Enterprise Co in a deal valued at about $1.3 billion.
13.05.2019
20:56 Technology.org: Supercomputer Simulations Show Black Holes and Their Magnetic Bubbles

When the Event Horizon Telescope team released the first picture ever taken of a black hole in mid-April, the general …
19:18 LiveScience.com: New Supercomputer Will Span Continents, Outrace World's Fastest

Paired processors on two continents will power a new computer "brain."
08:47 Zdnet.com: Square Kilometre Array supercomputer design completed

Design work on the 'brain of the SKA', one of two supercomputers, has been completed.
10.05.2019
06:56 Arxiv.org Physics: SPH-EXA: Enhancing the Scalability of SPH Codes via an Exascale-Ready SPH Mini-App (arXiv:1905.03344v1 [physics.comp-ph])

Numerical simulations of fluids in astrophysics and computational fluid dynamics (CFD) are among the most computationally demanding calculations in terms of sustained floating-point operations per second, or FLOP/s. These numerical simulations are expected to benefit significantly from future exascale computing infrastructures, which will perform 10^18 FLOP/s. The performance of SPH codes is, in general, adversely impacted by several factors, such as multiple time-stepping, long-range interactions, and/or boundary conditions. In this work an extensive study of three SPH implementations, SPHYNX, ChaNGa, and XXX, is performed to gain insights and to expose any limitations and characteristics of the codes. These codes are the starting point of an interdisciplinary co-design project, SPH-EXA, for the development of an exascale-ready SPH mini-app. We implemented a rotating square …
09.05.2019
09:21 Gizmag: World's fastest supercomputer will heat up the race to the exascale era

Supercomputers are due to take a huge leap forward when the "exascale" era kicks off in 2021 with the launch of Aurora. But now it looks like that world-leading machine will be usurped before it's even set up. The Frontier system has just been announced, which will boast the power of over 1.5 exaflops.
08.05.2019
18:14 Phys.org: AMD's tech to power new supercomputer for Department of Energy

Advanced Micro Devices announced Tuesday that its technology will help power a new supercomputer at Tennessee-based Oak Ridge National Laboratory in 2021.
07.05.2019
17:25 Zdnet.com: AMD, Cray to build 1.5-exaflop Frontier supercomputer for Oak Ridge National Lab

Frontier is set to become the world's fastest supercomputer when it arrives at the lab in 2021.
14:27 ScienceNewsDaily.org: AMD and Cray are building the 'world's most powerful supercomputer'

The US may be set to hang onto the crown of having the world's most powerful supercomputer for some time. Cray Computing and AMD are building an exascale machine with the Oak Ridge …
03.05.2019
20:39 WhatReallyHappened.com: “THE HAMMER” — Ultra-secret Supercomputer System Used by CIA/NSA to ‘Wiretap’ Trump

President Obama’s Director of National Intelligence (DNI) James Clapper and his Central Intelligence Agency (CIA) director John Brennan oversaw a secret supercomputer system known as “THE HAMMER,” according to former NSA/CIA contractor turned whistleblower Dennis Montgomery.
Clapper and Brennan were using the supercomputer system to conduct illegal and unconstitutional government data harvesting and wiretapping. THE HAMMER was installed on federal property in Fort Washington, Maryland, at a complex which some speculate is a secret CIA and NSA operation operating at a US Naval facility.
President Trump’s allegation that the Obama Administration was wiretapping him is supported not only by Montgomery’s whistleblower revelations about Brennan’s and Clapper’s computer system THE HAMMER, but also by statements made this week by William Binney, a former NSA Technical Director of the World …
01.05.2019
00:23 ScienceDaily.com: Novel software to balance data-processing load in supercomputers to be presented

The modern-age adage "work smarter, not harder" stresses the importance of not only working to produce, but also making efficient use of resources.
29.04.2019
07:13 Arxiv.org CS: A Benchmarking Study to Evaluate Apache Spark on Large-Scale Supercomputers (arXiv:1904.11812v1 [cs.DC])

As dataset sizes increase, data analysis tasks in high-performance computing (HPC) are increasingly dependent on sophisticated dataflows and out-of-core methods for efficient system utilization. In addition, as HPC systems grow, memory access and data sharing are becoming performance bottlenecks. Cloud computing employs a data-processing paradigm typically built on a loosely connected group of low-cost computing nodes without relying upon shared storage and/or memory. Apache Spark is a popular engine for large-scale data analysis in the cloud, which we have successfully deployed via job submission scripts on production clusters.
In this paper, we describe common parallel analysis dataflows for both Message Passing Interface (MPI) and cloud-based applications. We developed an effective benchmark to measure the performance characteristics of these tasks using both types of systems …
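The MPI and Spark dataflows this abstract benchmarks share a map → shuffle → reduce shape. A minimal pure-Python sketch of that pattern (illustrative function names, not code from the paper):

```python
from collections import defaultdict

# Map -> shuffle -> reduce, the dataflow shape benchmarked on both
# MPI and Spark back ends. All names here are illustrative.

def map_phase(records):
    # Emit (key, value) pairs, as a Spark map stage would.
    return [(r % 4, r) for r in records]

def shuffle_phase(pairs):
    # Group values by key, standing in for the network shuffle.
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)
    return groups

def reduce_phase(groups):
    # Aggregate each key's values, like Spark's reduceByKey.
    return {k: sum(vs) for k, vs in groups.items()}

result = reduce_phase(shuffle_phase(map_phase(range(8))))
print(result)  # sums of 0..7 grouped by value % 4
```

In Spark the shuffle stage is where memory access and data sharing become the bottleneck the paper measures; in MPI the same step is an explicit collective exchange.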
07:13 Arxiv.org CS: Shall numerical astrophysics step into the era of Exascale computing? (arXiv:1904.11720v1 [astro-ph.IM])

High-performance computing numerical simulations are today one of the most effective instruments to implement and study new theoretical models, and they are mandatory during the preparatory and operational phases of any scientific experiment. New challenges in cosmology and astrophysics will require a large number of new, extremely computationally intensive simulations to investigate physical processes at different scales. Moreover, the size and complexity of the new generation of observational facilities also implies a new generation of high-performance data reduction and analysis tools, pushing toward the use of Exascale computing capabilities. Exascale supercomputers cannot be produced today. We discuss the major technological challenges in the design, development and use of such computing capabilities, and we report on the progress that has been made in recent years in Europe …
23.04.2019
10:22 Arxiv.org Physics: Status and Future Perspectives for Lattice Gauge Theory Calculations to the Exascale and Beyond (arXiv:1904.09725v1 [hep-lat])

In this and a set of companion whitepapers, the USQCD Collaboration lays out a program of science and computing for lattice gauge theory. These whitepapers describe how calculations using lattice QCD (and other gauge theories) can aid the interpretation of ongoing and upcoming experiments in particle and nuclear physics, as well as inspire new ones.
10.04.2019
03:16 Zdnet.com: South Australia to house Defence's new AU$68m supercomputer centre

Construction of the Defence High Performance Computing Centre will begin later this year.
04.04.2019
10:45 Arxiv.org Statistics: Deep Reinforcement Learning on a Budget: 3D Control and Reasoning Without a Supercomputer (arXiv:1904.01806v1 [cs.LG])

An important goal of research in Deep Reinforcement Learning in mobile robotics is to train agents capable of solving complex tasks, which require a high level of scene understanding and reasoning from an egocentric perspective. When trained from simulations, optimal environments should satisfy a currently unobtainable combination of high-fidelity photographic observations, massive amounts of different environment configurations and fast simulation speeds. In this paper we argue that research on training agents capable of complex reasoning can be simplified by decoupling it from the requirement of high-fidelity photographic observations. We present a suite of tasks requiring complex reasoning and exploration in continuous, partially observable 3D environments. The objective is to provide challenging scenarios and a robust baseline agent architecture that can be trained on mid-range consumer …
01.04.2019
21:34 Technology.org: Supercomputers Help Supercharge Protein Assembly

Red blood cells are amazing. They pick up oxygen from our lungs and carry it all over our …
06:54 ScienceDaily.com: Scientists develop way to perform supercomputer simulations of the heart on cellphones

You can now perform supercomputer simulations of the heart's electrophysiology in real time on desktop computers and even cellphones. A team of scientists developed a new approach that can not only help diagnose heart conditions and test new treatments, but also push the boundaries of cardiac science by opening up a floodgate of new cardiac research and education.
30.03.2019
02:03 ScienceDaily.com: Supercomputers help supercharge protein assembly

Using proteins derived from jellyfish, scientists assembled a complex sixteen-protein structure composed of two stacked octamers by supercharging alone. This research could be applied to useful technologies such as pharmaceutical targeting, artificial energy harvesting, 'smart' sensing and building materials, and more. Computational modeling through XSEDE allocations on Stampede2 (TACC) and Comet (SDSC) refined measurements of the structure.
29.03.2019
23:52 Phys.org: Supercomputers help supercharge protein assembly

Red blood cells are amazing. They pick up oxygen from our lungs and carry it all over our body to keep us alive. The hemoglobin molecule in red blood cells transports oxygen by changing its shape in an all-or-nothing fashion. Four copies of the same protein in hemoglobin open and close like flower petals, structurally coupled to respond to each other. Using supercomputers, scientists are just starting to design proteins that self-assemble to combine and resemble life-giving molecules like hemoglobin. The scientists say their methods could be applied to useful technologies such as pharmaceutical targeting, artificial energy harvesting, 'smart' sensing and building materials, and more.
26.03.2019
20:01 LiveScience.com: Supercomputers Solve a Mystery Hidden Inside Merging Water Droplets

Weird things happen when water droplets smash into each other.
19:02 Technology.org: Supercomputer Simulations Help Combat Tuberculosis (TB) Granulomas

The greatest cause of death due to infection globally is tuberculosis (TB). Two supercomputers – Comet at the San Diego …
15:05 Zdnet.com: Europe's big weather supercomputer data center is about to leave the UK

The European Centre for Medium-Range Weather Forecasts is setting up its HPC data center in Bologna, Italy.
25.03.2019
17:06 SingularityHub.Com: Intel Is Building the World’s Most Powerful Supercomputer

A supercomputer capable of a quintillion operations a second will go online in 2021 after the US government handed Intel and supercomputer manufacturer Cray a contract to build an exascale computer called Aurora. This machine is being built from the bottom up to run AI at unprecedented scales. Today’s most powerful supercomputers measure their performance […]
20.03.2019
18:27 ScienceDaily.com: Supercomputer simulations shed light on how liquid drops combine

High-performance computing has revealed in detail how liquid droplets combine, in a development with applications such as improving 3D printing technologies or the forecasting of thunderstorms.
16:44 Phys.org: Supercomputer sheds light on how droplets merge

Scientists have revealed the precise molecular mechanisms that cause drops of liquid to combine, in a discovery that could have a range of applications.
15:08 Phys.org: Supercomputers to help supercharge ceramic matrix composite manufacturing

New software capabilities developed by computational scientists at the US Department of Energy's (DOE's) Oak Ridge National Laboratory (ORNL) and the Rolls-Royce Corporation could soon help engineers lift the gas turbine engines of aircraft and power plants to higher efficiencies.
19.03.2019
22:41 WhatReallyHappened.com: US government teams up with Intel and Cray on $500 million plan to build Project Aurora supercomputer capable of completing 1 quintillion calculations per second

A U.S. government-led group is working with chipmaker Intel Corp and Cray Inc to develop and build the nation's fastest computer by 2021 for conducting nuclear weapons and other research, officials said on Monday.
The Department of Energy and the Argonne National Laboratory near Chicago said they are working on a supercomputer dubbed Aurora with Intel, the world's biggest supplier of data center chips, and Cray, which specializes in ultra-fast machines.
16:57 Telegraph.co.uk: US to create world's most powerful supercomputer capable of 1 quintillion calculations per second
16:02 TechnologyReview.com: The US is building a $500m ‘exascale’ computer that will be the world’s most powerful
15:14 Phys.org: New Argonne supercomputer, built for next-gen AI, will be most powerful in U.S.

The most powerful computer ever built in the United States will make its home at Argonne National Laboratory in 2021, the U.S. Department of Energy and Intel announced today. Aurora, the United States' first exascale computer, will combine unprecedented processing power with the growing potential of artificial intelligence to help solve the world's most important and complex scientific challenges.
14:52 ExtremeTech.com: Intel, DOE Announce First-Ever Exascale Supercomputer ‘Aurora’

Intel and the DOE have announced the first exascale computer expected to be deployed. Codenamed Aurora, the system should be ready by 2021.
12:04 New York Times: Racing Against China, U.S. Reveals Details of $500 Million Supercomputer

Lab officials predict it will be the first American machine to reach a milestone called “exascale” performance, surpassing a quintillion calculations per second.
10:34 Technology.org: U.S. Department of Energy and Intel to deliver first exascale supercomputer

Targeted for 2021 delivery, the Argonne National Laboratory supercomputer will enable high-performance computing and artificial intelligence at exascale.
10:16 Ixbt.com: Unannounced Intel Xe accelerators to form the basis of Aurora, the first exascale-class supercomputer

Intel has published a press release on its website announcing that, together with the US Department of Energy, it is preparing to deliver the first exascale-class supercomputer, i.e. one with performance above 1 exaFLOPS, in 2021.
The supercomputer is named Aurora and will be housed at Argonne National Laboratory. The contract as a whole is valued at $500 million.
Most interesting is what the supercomputer is built on: Aurora will include "new Intel technologies designed specifically for the convergence of artificial intelligence and high-performance computing at extreme scale," including solutions based on the Intel Xe compute architecture. Even though Intel has since explained that Xe is not a brand but the name of the company's transition from an energy-efficient GPU architecture to a scalable one, in this case the processor …
05:22 Gizmag: Intel's next-gen supercomputer to usher in exascale era in 2021

The next generation of supercomputers has an official start date. Intel and the US Department of Energy (DOE) are teaming up to deliver the world's first exascale supercomputer in 2021, giving a huge boost to many different fields of research. Named Aurora, the new system will be a thousand times more powerful than the petascale generation that began in 2008 and is still in wide use today.
18.03.2019
23:37 Zdnet.com: U.S. Department of Energy plans exaflop supercomputer in 2021

The effort will leverage Cray's Shasta supercomputing platform as well as Intel technology.
22:51 ScienceNewsDaily.org: America’s first exascale supercomputer to be built by 2021

Details of America’s next-generation supercomputer were revealed at a ceremony attended by Secretary of Energy Rick Perry and Senator Dick Durbin at Argonne National Laboratory …
15.03.2019
15:45 Phys.org: Handling trillions of supercomputer files just got simpler

A new distributed file system for high-performance computing, available today via the software collaboration site GitHub, provides unprecedented performance for creating, updating and managing extreme numbers of files.
05.03.2019
15:03 LiveScience.com: Physicists Used Supercomputers to Map the Bone-Crushing Pressures Hiding Inside Protons

If you shrank yourself down and entered a proton, you'd experience among the most intense pressures found anywhere in the universe.
21.02.2019
15:02 Technology.org: DTU boasts top-performing supercomputers

Over a five-year period, DTU will invest close to EUR 9.4 million (DKK 70 million) in upgrading and …
11:41 Arxiv.org CS: 'Zhores' -- Petaflops supercomputer for data-driven modeling, machine learning and artificial intelligence installed in Skolkovo Institute of Science and Technology (arXiv:1902.07490v1 [cs.DC])

The petaflops supercomputer "Zhores", recently launched in the Center for Computational and Data-Intensive Science and Engineering (CDISE) of the Skolkovo Institute of Science and Technology (Skoltech), opens up exciting new opportunities for scientific discoveries at the institute, especially in the areas of data-driven modeling, machine learning and artificial intelligence. This supercomputer utilizes the latest generation of Intel and NVidia processors to provide resources for the most compute-intensive tasks of the Skoltech scientists working in digital pharma, predictive analytics, photonics, material science, image processing, plasma physics and many more. Currently it places 6th in the Russian and CIS TOP-50 (2018) supercomputer list. In this article we summarize the cluster properties and discuss the measured performance and usage modes of this scientific instrument in …
19.02.2019
09:35 Arxiv.org CS: ENBB Processor: Towards the ExaScale Numerical Brain Box [Position Paper] (arXiv:1902.06655v1 [cs.AR])

ExaScale systems will be a key driver for simulations that are essential for the advance of science and economic growth. We aim to present a new concept of microprocessor for floating-point computations, useful as a basic building block of ExaScale systems and beyond. The proposed microprocessor architecture has a front end whose programming interface is based on the concept of event-driven simulation. The user program is executed as an event-driven simulation using a hardware/software co-designed simulator. This is the flexible part of the system. The back end exploits the concept of uniform topology, as in a brain: a massive packet-switched interconnection network with flit credit-based flow control and virtual channels that seamlessly incorporates communication, arithmetic and storage. Floating-point computations are incorporated as on-line arithmetic operators in the output ports of the …
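The front end described above executes user programs as an event-driven simulation. A minimal heap-based sketch of that execution model (all names illustrative, not the ENBB design):

```python
import heapq

# Minimal event-driven simulator: callbacks fire in timestamp order,
# and a callback may schedule further events. Illustrative only.

class EventSim:
    def __init__(self):
        self.now = 0.0
        self.queue = []   # min-heap of (time, seq, callback)
        self.seq = 0      # tie-breaker so equal times pop in FIFO order

    def schedule(self, delay, callback):
        heapq.heappush(self.queue, (self.now + delay, self.seq, callback))
        self.seq += 1

    def run(self):
        while self.queue:
            self.now, _, cb = heapq.heappop(self.queue)
            cb(self)

log = []
sim = EventSim()
sim.schedule(2.0, lambda s: log.append(("b", s.now)))
sim.schedule(1.0, lambda s: (log.append(("a", s.now)),
                             s.schedule(5.0, lambda s2: log.append(("c", s2.now)))))
sim.run()
print(log)  # events fire in timestamp order: a at 1.0, b at 2.0, c at 6.0
```

The heap guarantees events are processed in time order regardless of insertion order, which is the property an event-driven front end relies on.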
24.01.2019
22:52 Phys.org: Physicists use supercomputers and AI to create the most accurate model yet of black hole mergers

One of the most cataclysmic events to occur in the cosmos involves the collision of two black holes. Formed from the deathly collapse of massive stars, black holes are incredibly compact—a person standing near a stellar-mass black hole would feel gravity about a trillion times more strongly than they would on Earth. When two objects of this extreme density spiral together and merge, a fairly common occurrence in space, they radiate more power than all the stars in the universe.
02:49 WhatReallyHappened.com: IRS Becoming Big Brother With $99-Million Supercomputer – will give the agency the “unprecedented ability to track the lives and transactions of tens of millions of American citizens”
07.01.2019
15:09 AzoRobotics.com: Maximum Computing Power and Flexibility with AI-Capable Supercomputer ZF ProAI

ZF launched the newest model of its automotive supercomputer ZF ProAI right before the start of the 2019 Consumer Electronics Show (CES). The ZF ProAI RoboThink central control unit offers the maximum...
03.01.2019
18:36 WhatReallyHappened.com: This million-core supercomputer inspired by the human brain breaks all the rules

For all their fleshly failings, human brains are the model that computer engineers have always sought to emulate: huge processing power that's both surprisingly energy efficient and available in a tiny form factor. But late last year, in an unprepossessing former metal works in Manchester, one machine became the closest thing there is to an artificial human brain.
The one-million-core SpiNNaker -- short for Spiking Neural Network Architecture -- is the culmination of decades of work and millions of pounds of investment. The result: a massively parallel supercomputer designed to mimic the workings of the human brain, which it's hoped will give neuroscientists a new understanding of how the mind works and open up new avenues of medical research.
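The neuron model architectures like SpiNNaker typically simulate in vast numbers is the leaky integrate-and-fire neuron: membrane voltage leaks toward rest, integrates input current, and emits a spike on crossing a threshold. A minimal sketch with illustrative constants (not SpiNNaker's actual parameters):

```python
# Leaky integrate-and-fire neuron, forward-Euler integration.
# All constants are illustrative, in arbitrary units.

def simulate_lif(current, steps, dt=1.0, tau=10.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Return spike times for a constant input current."""
    v = v_rest
    spikes = []
    for step in range(steps):
        # Leak toward the resting potential, plus the input drive.
        v += dt * ((v_rest - v) / tau + current)
        if v >= v_thresh:        # threshold crossing emits a spike
            spikes.append(step * dt)
            v = v_reset          # membrane resets after firing
    return spikes

print(simulate_lif(current=0.15, steps=20))
```

On SpiNNaker each spike becomes a small packet routed to downstream neurons, which is why the event-driven, massively parallel design fits the workload.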
15:52 Zdnet.com: This million-core supercomputer inspired by the human brain breaks all the rules

SpiNNaker's spiking neural network mimics the human brain, and could fuel breakthroughs in robotics and health.
17.12.2018
19:05 Phys.org: Team wins major supercomputer time to study the edge of fusion plasmas

The U.S. Department of Energy (DOE) has awarded major computer hours on three leading supercomputers, including the world's fastest, to a team led by C.S. Chang of the DOE's Princeton Plasma Physics Laboratory (PPPL). The team is addressing issues that must be resolved for successful operation of ITER, the international experiment under construction in France to demonstrate the feasibility of producing fusion energy—the power that drives the sun and stars—in a magnetically controlled fusion facility called a "tokamak."
12.12.2018
15:11 Zdnet.com: The rise, fall, and rise of the supercomputer in the cloud era

Though the personal computer was born from garage projects, the supercomputer had been declining to the back of the garage. That is, until a handful of trends conspired to poke the reset button for the industry. Now the race is back on.
10.12.2018
14:44 Phys.org: Supercomputers without waste heat

Generally speaking, magnetism and the lossless flow of electrical current ("superconductivity") are competing phenomena that cannot coexist in the same sample. However, for building supercomputers, synergetically combining both states comes with major advantages compared to today's semiconductor technology, which is characterized by high power consumption and heat production. Researchers from the Department of Physics at the University of Konstanz have now demonstrated that the lossless electrical transfer of magnetically encoded information is possible. This finding enables enhanced storage density on integrated circuit chips and significantly reduces the energy consumption of computing centres. The results of this study have been published in the current issue of the scientific journal Nature Communications.
07.12.2018
22:48 ScienceDaily.com: Supercomputers without waste heat

Physicists explore superconductivity for information processing.
18:18 Nanowerk.com: Supercomputers without waste heat

Physicists explore superconductivity for information processing.
06.12.2018
17:16 Phys.org: LIGO supercomputer upgrade will speed up groundbreaking astrophysics research

In 2016, an international team of scientists found definitive evidence—tiny ripples in space known as gravitational waves—to support one of the last remaining untested predictions of Einstein's theory of general relativity. The team used the Laser Interferometer Gravitational-Wave Observatory (LIGO), which has since made several gravitational wave discoveries. Each discovery was possible in part because of a global network of supercomputer clusters, one of which is housed at Penn State. Researchers use this network, known as the LIGO Data Grid, to analyze the gravitational wave data.
05.12.2018
18:17 Telegraph.co.uk: UK supercomputer gives African farmers early warning of pests and blights
07:56 Arxiv.org Physics: Pushing Back the Limit of Ab-initio Quantum Transport Simulations on Hybrid Supercomputers (arXiv:1812.01396v1 [physics.comp-ph])

The capabilities of CP2K, a density-functional theory package, and OMEN, a nano-device simulator, are combined to study transport phenomena from first principles in unprecedentedly large nanostructures. Based on the Hamiltonian and overlap matrices generated by CP2K for a given system, OMEN solves the Schroedinger equation with open boundary conditions (OBCs) for all possible electron momenta and energies. To accelerate this core operation, a robust algorithm called SplitSolve has been developed. It allows the OBCs to be treated on CPUs and the Schroedinger equation on GPUs simultaneously, taking advantage of hybrid nodes. Our key achievements on the Cray XK7 Titan are (i) a reduction in time-to-solution by more than one order of magnitude compared to standard methods, enabling the simulation of structures with more than 50000 atoms, and (ii) a parallel efficiency of 97% when scaling from 756 up to …
01.12.2018
00:30 ScienceDaily.com: A new way to see stress -- using supercomputers

Supercomputer simulations show that at the atomic level, material stress doesn't behave symmetrically. Widely used atomic stress formulae significantly underestimate stress near stress concentrators such as a dislocation core, crack tip, or interface in a material under deformation. The supercomputers simulated force interactions in a Lennard-Jones perfect single crystal of 240,000 atoms. The study's findings could help scientists design new materials such as glass or metal that doesn't ice up.
30.11.2018
20:42 Phys.orgA new way to see stress—using supercomputers

It's easy to take a lot for granted. Scientists do this when they study stress, the force per unit area on an object. Scientists handle stress mathematically by assuming it to have symmetry. That means the components of stress are identical if you transform the stressed object with something like a turn or a flip. Supercomputer simulations show that at the atomic level, material stress doesn't behave symmetrically. The findings could help scientists design new materials such as glass or metal that doesn't ice up.

29.11.2018
09:42 Arxiv.org CSThe L-CSC cluster: Optimizing power efficiency to become the greenest supercomputer in the world in the Green500 list of November 2014. (arXiv:1811.11475v1 [cs.PF])

The L-CSC (Lattice Computer for Scientific Computing) is a general-purpose compute cluster built with commodity hardware installed at GSI. Its main operational purpose is Lattice QCD (LQCD) calculations for physics simulations. Quantum Chromodynamics (QCD) is the physical theory describing the strong force, one of the four known fundamental interactions in the universe. L-CSC leverages a multi-GPU design to accommodate LQCD's huge demand for memory bandwidth. In recent years, heterogeneous clusters with accelerators such as GPUs have become more and more powerful, while supercomputers in general have shown enormous increases in power consumption, making electricity costs and cooling a significant factor in the total cost of ownership. Using mainly GPUs for processing, L-CSC is very power-efficient, and its architecture was optimized to provide the greatest possible power efficiency. This …

22.11.2018
10:33 Phys.orgMeet Michael, the supercomputer designed to accelerate UK research for EV batteries

A new supercomputer designed to speed up research on two of the UK's most important battery research projects has been installed at University College London (UCL). Named Michael, after the UK's most famous battery scientist, Michael Faraday, the supercomputer will reach 265 teraflops at peak performance.

00:11 WhatReallyHappened.comMeet the new supercomputer behind the US nuclear arsenal

20.11.2018
13:19 Arxiv.org StatisticsImage Classification at Supercomputer Scale. (arXiv:1811.06992v1 [cs.LG])

Deep learning is extremely computationally intensive, and hardware vendors have responded by building faster accelerators in large clusters. Training deep learning models at petaFLOPS scale requires overcoming both algorithmic and systems software challenges. In this paper, we discuss three systems-related optimizations: (1) distributed batch normalization to control per-replica batch sizes, (2) input pipeline optimizations to sustain model throughput, and (3) 2-D torus all-reduce to speed up gradient summation. We combine these optimizations to train ResNet-50 on ImageNet to 76.3% accuracy in 2.2 minutes on a 1024-chip TPU v3 Pod with a training throughput of over 1.05 million images/second and no accuracy drop.
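The gradient-summation step can be illustrated with a simulated ring all-reduce (a 1-D sketch with made-up helper names; the paper uses a 2-D torus, which runs rings over rows and then columns of the chip mesh). The key property is that each replica forwards only one gradient chunk per step, so link bandwidth, not replica count, bounds the cost:

```python
import numpy as np

def ring_allreduce(grads):
    """Simulate the reduce-scatter phase of a ring all-reduce over `grads`,
    a list of equal-shape per-replica gradient arrays, then assemble the
    fully summed gradient from the chunk each replica ends up owning."""
    n = len(grads)
    # Each replica splits its gradient into n chunks.
    buf = [list(np.array_split(g.astype(float), n)) for g in grads]
    # Reduce-scatter: at step s, replica r sends chunk (r - s) mod n to
    # neighbour (r + 1) mod n, which accumulates it. Snapshot the sends so
    # all transfers within a step happen "simultaneously".
    for s in range(n - 1):
        sends = [(r, (r - s) % n, buf[r][(r - s) % n].copy()) for r in range(n)]
        for r, idx, data in sends:
            buf[(r + 1) % n][idx] += data
    # After n-1 steps, replica (k - 1) mod n holds the full sum of chunk k;
    # an all-gather (omitted here) would circulate these back around the ring.
    return np.concatenate([buf[(k - 1) % n][k] for k in range(n)])
```

A 2-D torus halves the number of hops per ring at the cost of running two reduce phases, which is why it scales better on a TPU pod's mesh interconnect.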

19.11.2018
18:03 SingularityHub.ComThe SpiNNaker Supercomputer, Modeled After the Human Brain, Is Up and Running

We’ve long used the brain as inspiration for computers, but the SpiNNaker supercomputer, switched on this month, is probably the closest we’ve come to recreating it in silicon. Now scientists hope to use the supercomputer to model the very thing that inspired its design. The brain is the most complex machine in the known universe …

14.11.2018
13:23 Euronews.NetNew weather supercomputer to be installed in Bologna

A next-generation supercomputer is set to be installed in Bologna, Italy. The new system could help predict the weather with more accuracy, giving people a better chance of preparing for high-impact events such as windstorms or floods.

13.11.2018
10:35 Arxiv.org CSScalability Evaluation of Iterative Algorithms Used for Supercomputer Simulation of Physical processes. (arXiv:1811.04276v1 [cs.DC])

The paper is devoted to the development of a methodology for evaluating the scalability of compute-intensive iterative algorithms used in simulating complex physical processes on supercomputer systems. The proposed methodology is based on the BSF (Bulk Synchronous Farm) parallel computation model, which makes it possible to predict the upper scalability bound of an iterative algorithm in the early phases of its design. The BSF model assumes the representation of the algorithm in the form of operations on lists using higher-order functions. Two classes of representations are considered: BSF-M (Map BSF) and BSF-MR (Map-Reduce BSF). The proposed methodology is described by the example of solving a system of linear equations by the Jacobi method. For the Jacobi method, two iterative algorithms are constructed: Jacobi-M, based on the BSF-M representation, and Jacobi-MR, based on the BSF-MR representation …
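The Jacobi method used as the running example fits in a few lines. This serial sketch (not the paper's BSF-M/BSF-MR implementations) shows the map-style structure the BSF representations parallelize: within one sweep, every component update is independent of the others.

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=10_000):
    """Solve A x = b with the Jacobi iteration. Each sweep computes
    x_new[i] = (b[i] - sum_{j != i} A[i, j] * x[j]) / A[i, i],
    which converges for strictly diagonally dominant A."""
    D = np.diag(A)              # diagonal entries of A
    R = A - np.diagflat(D)      # off-diagonal remainder
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D  # the "map" step: all components at once
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new
        x = x_new
    return x

# Strictly diagonally dominant system, so the iteration converges.
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
x = jacobi(A, b)
```

In BSF terms, the per-component updates form the list that the farm's workers map over, and the convergence check is the reduce.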

08:32 Technology.orgSierra reaches higher altitudes, takes No. 2 spot on list of world’s fastest supercomputers

Sierra, Lawrence Livermore National Laboratory’s (LLNL) newest supercomputer, rose to second place on the list of the world’s fastest supercomputers.

12.11.2018
20:53 ScienceNewsDaily.orgUS overtakes China in top supercomputer list

A new list of the world's most powerful machines puts the US in the top two spots.

19:55 Zdnet.comUS now claims world's top two fastest supercomputers

According to the Top500 list, the IBM-built supercomputers Summit and Sierra have dethroned China's Sunway TaihuLight in terms of performance.

07.11.2018
06:07 Arxiv.org StatisticsMesh-TensorFlow: Deep Learning for Supercomputers. (arXiv:1811.02084v1 [cs.LG])

Batch-splitting (data-parallelism) is the dominant distributed Deep Neural Network (DNN) training strategy, due to its universal applicability and its amenability to Single-Program-Multiple-Data (SPMD) programming. However, batch-splitting suffers from problems including the inability to train very large models (due to memory constraints), high latency, and inefficiency at small batch sizes. All of these can be solved by more general distribution strategies (model-parallelism). Unfortunately, efficient model-parallel algorithms tend to be complicated to discover, describe, and implement, particularly on large clusters. We introduce Mesh-TensorFlow, a language for specifying a general class of distributed tensor computations. Where data-parallelism can be viewed as splitting tensors and operations along the "batch" dimension, in Mesh-TensorFlow the user can specify any tensor dimensions to …
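The contrast between batch-splitting and more general layouts can be illustrated with plain NumPy (a toy sketch with device placement simulated by Python lists; Mesh-TensorFlow itself expresses this with named tensor dimensions laid out over an n-dimensional processor mesh):

```python
import numpy as np

def sharded_matmul(x, w, n_shards, split="batch"):
    """Compute x @ w with one tensor dimension split across simulated
    devices. split="batch" is classic data parallelism: each device holds
    a batch slice and a full weight copy. split="hidden" splits w's output
    columns (model parallelism): each device holds the full batch and a
    weight slice, and the partial results are concatenated."""
    if split == "batch":
        parts = [xs @ w for xs in np.array_split(x, n_shards, axis=0)]
        return np.concatenate(parts, axis=0)
    elif split == "hidden":
        parts = [x @ ws for ws in np.array_split(w, n_shards, axis=1)]
        return np.concatenate(parts, axis=1)
    raise ValueError(f"unknown split: {split}")
```

Both layouts produce the same result as the unsharded matmul; what changes is which tensor lives where and what communication the concatenation stands in for.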

05:56 Arxiv.org CSDefining Big Data Analytics Benchmarks for Next Generation Supercomputers. (arXiv:1811.02287v1 [cs.PF])

The design and construction of high performance computing (HPC) systems relies on exhaustive performance analysis and benchmarking. Traditionally this activity has been geared exclusively towards simulation scientists, who, unsurprisingly, have been the primary customers of HPC for decades. However, there is a large and growing volume of data science work that requires these large scale resources, and as such the calls for inclusion and investments in data for HPC have been increasing. So when designing a next generation HPC platform, it is necessary to have HPC-amenable big data analytics benchmarks. In this paper, we propose a set of big data analytics benchmarks and sample codes designed for testing the capabilities of current and next generation supercomputers.

06.11.2018
23:05 Gizmag Million-core neuromorphic supercomputer could simulate an entire mouse brain


After 12 years of work, researchers at the University of Manchester in England have completed construction of a "SpiNNaker" (Spiking Neural Network Architecture) supercomputer. It can simulate the internal workings of up to a billion neurons through a whopping one million processing units.

05.11.2018
21:37 ScientificAmerican.ComA New Supercomputer Is the World's Fastest Brain-Mimicking Machine

The computer has one million processors and 1,200 interconnected circuit boards.

15:20 LiveScience.comNew Supercomputer with 1 Million Processors Is World's Fastest Brain-Mimicking Machine

A supercomputer that "thinks" like a brain can simulate neural activity in real time.

02.11.2018
22:54 ExtremeTech.comNASA Will Use ISS Supercomputer for Science Experiments

It was only there for a test run, but now the agency plans to use it for processing data and running experiments.

18:45 Telegraph.co.uk'Human brain' supercomputer switched on for the first time


18:24 CNN HealthA brain-like supercomputer could help Siri understand your accent

Hey Siri, listen up. A multitasking supercomputer that attempts to mimic the human brain was switched on Friday -- and it could be used to help virtual assistants like Apple's Siri and Amazon's Alexa understand your accent.

09:46 News-Medical.NetWorld's largest neuromorphic supercomputer being switched on for the first time

The world's largest neuromorphic supercomputer designed and built to work in the same way a human brain does has been fitted with its landmark one-millionth processor core and is being switched on for the first time.

31.10.2018
15:07 ExtremeTech.comNvidia Tesla, AMD Epyc to Power New Berkeley Supercomputer

Nvidia and AMD are the big winners in a new supercomputer announcement that will put Epyc and Tesla silicon in Cray's latest Shasta system.

30.10.2018
21:12 Zdnet.comUS Energy Dept. announces new Nvidia-powered supercomputer

The Perlmutter will more than triple the computational power currently available at the National Energy Research Scientific Computing (NERSC) Center.

07:02 Arxiv.org CSFFT, FMM, and Multigrid on the Road to Exascale: performance challenges and opportunities. (arXiv:1810.11883v1 [cs.DC])

FFT, FMM, and multigrid methods are widely used fast and highly scalable solvers for elliptic PDEs. However, emerging large-scale computing systems are introducing challenges in comparison to current petascale computers. Recent efforts have identified several constraints in the design of exascale software that include massive concurrency, resilience management, exploiting the high performance of heterogeneous systems, energy efficiency, and utilizing the deeper and more complex memory hierarchy expected at exascale. In this paper, we perform a model-based comparison of the FFT, FMM, and multigrid methods in the context of these projected constraints. In addition we use performance models to offer predictions about the expected performance on upcoming exascale system configurations based on current technology trends.
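Of the three solvers, the FFT variant is the easiest to sketch. A periodic 1-D Poisson solve (illustrative only, not the paper's benchmark code) shows why it is fast: the transform decouples the elliptic problem into independent per-mode divisions, at the cost of the global communication the paper's exascale models scrutinize.

```python
import numpy as np

def fft_poisson_1d(f, L=2 * np.pi):
    """Spectral solve of u'' = f on a periodic interval of length L:
    in Fourier space the equation becomes -k^2 * u_hat = f_hat, so each
    mode is solved by one division. The k = 0 mode is pinned to zero
    (the periodic solution is only defined up to a constant)."""
    n = len(f)
    k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi   # angular wavenumbers
    fhat = np.fft.fft(f)
    uhat = np.zeros_like(fhat)
    nz = k != 0
    uhat[nz] = -fhat[nz] / k[nz] ** 2
    return np.real(np.fft.ifft(uhat))
```

With f(x) = -sin(x) on [0, 2π) the solver recovers u(x) = sin(x), since sin'' = -sin.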

29.10.2018
09:02 Technology.orgLawrence Livermore unveils NNSA’s Sierra, world’s third fastest supercomputer

The Department of Energy’s National Nuclear Security Administration (NNSA), Lawrence Livermore National Laboratory (LLNL) and its industry partners …

24.10.2018
21:44 ScienceMag.orgThree Chinese teams join race to build the world’s fastest supercomputer

Exascale computers promise dramatic advances in climate modeling, genetics studies, and artificial intelligence

23.10.2018
10:25 NewScientist.ComTiny supercomputers could be made from the skeleton inside your cells

Building a computer out of the skeletons that hold our cells together could make them smaller and far more energy efficient

17.10.2018
15:36 ScienceDaily.comSupermassive black holes and supercomputers

The universe's deep past is beyond the reach of even the mighty Hubble Space Telescope. But a new review explains how creation of the first stars and galaxies is nevertheless being mapped in detail, with the aid of computer simulations and theoretical models -- and how a new generation of supercomputers and software is being built that will fill in the gaps.

09:35 Nanowerk.comSupermassive black holes and supercomputers

Researchers reveal the story of the oldest stars and galaxies, compiled from 20 years of simulating the early universe.

15.10.2018
16:35 Technology.orgSupercomputer predicts optical properties of complex hybrid materials

Materials scientists at Duke University computationally predicted the electrical and optical properties of semiconductors made from extended organic …
