Techh.info/tech: technology hourly

Supercomputers

19.08.2019
05:58 Arxiv.org Statistics: Multitask and Transfer Learning for Autotuning Exascale Applications. (arXiv:1908.05792v1 [cs.LG])

Multitask learning and transfer learning have proven to be useful in the field of machine learning when additional knowledge is available to help a prediction task. We aim at deriving methods following these paradigms for use in autotuning, where the goal is to find the optimal performance parameters of an application treated as a black-box function. We show comparative results with state-of-the-art autotuning techniques. For instance, we observe an average $1.5x$ improvement of the application runtime compared to the OpenTuner and HpBandSter autotuners. We explain how our approaches can be more suitable than some state-of-the-art autotuners for the tuning of any application in general and of expensive exascale applications in particular.
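
For readers unfamiliar with autotuning, the sketch below illustrates the general setting the paper works in: the application is treated as a black-box function of its performance parameters, and knowledge from a previously tuned, related task seeds the search. It is a minimal illustration with made-up parameter names and a synthetic runtime function, not the authors' method or the OpenTuner/HpBandSter APIs.

```python
# Minimal, hypothetical sketch of black-box autotuning with a transferred
# warm start. The "runtime" function is synthetic; in practice it would run
# and time the real application for each candidate configuration.
import random

SEARCH_SPACE = {
    "block_size": [16, 32, 64, 128, 256],
    "unroll": [1, 2, 4, 8],
    "threads": [1, 2, 4, 8, 16],
}

def sample_config():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def runtime(cfg):
    # Stand-in for measuring the application's runtime (the black box).
    return (abs(cfg["block_size"] - 64) / 64
            + abs(cfg["unroll"] - 4) / 4
            + abs(cfg["threads"] - 8) / 8
            + random.uniform(0.0, 0.1))

def autotune(budget, warm_start=()):
    # Transfer learning at its simplest: configurations that worked well on a
    # related application are evaluated first, then random search continues.
    candidates = list(warm_start) + [sample_config() for _ in range(budget)]
    best_cfg, best_time = None, float("inf")
    for cfg in candidates[:budget]:
        t = runtime(cfg)
        if t < best_time:
            best_cfg, best_time = cfg, t
    return best_cfg, best_time

# Configuration transferred from a previously tuned, similar application.
prior = [{"block_size": 64, "unroll": 4, "threads": 8}]
print(autotune(budget=30, warm_start=prior))
```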

13.08.2019
16:36 Zdnet.com: Cray lands $600 million contract from DOE to build El Capitan exascale supercomputer

This is the third US exascale win for Cray, which was recently acquired by Hewlett Packard Enterprise for $1.3 billion.

01.08.2019
18:43 Phys.org: Is your supercomputer stumped? There may be a quantum solution

Some math problems are so complicated that they can bog down even the world's most powerful supercomputers. But a wild new frontier in computing that applies the rules of the quantum realm offers a different approach.

18:38 ScienceDaily.com: Is your supercomputer stumped? There may be a quantum solution

A new study details how a quantum computing technique called 'quantum annealing' can be used to solve problems relevant to fundamental questions in nuclear physics about the subatomic building blocks of all matter. It could also help answer other vexing questions in science and industry, too.
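
As background (not taken from the study itself), quantum annealers minimize a quadratic unconstrained binary optimization (QUBO) objective. The toy sketch below shows what such an objective looks like, solved here by brute force; the coefficients are invented for illustration.

```python
# Toy illustration of a QUBO objective, the kind of problem a quantum annealer
# minimizes. Coefficients are made up; a real study would encode its physics
# problem into Q and let the annealer search for the lowest-energy bit string.
from itertools import product

Q = {(0, 0): -1.0, (1, 1): -1.0, (2, 2): -1.0, (0, 1): 2.0, (1, 2): 2.0}

def energy(bits):
    return sum(coeff * bits[i] * bits[j] for (i, j), coeff in Q.items())

best = min(product([0, 1], repeat=3), key=energy)
print("lowest-energy assignment:", best, "energy:", energy(best))
```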

04:32 Arxiv.org CS: Deploying a Top-100 Supercomputer for Large Parallel Workloads: the Niagara Supercomputer. (arXiv:1907.13600v1 [cs.DC])

Niagara is currently the fastest supercomputer accessible to academics in Canada. It was deployed at the beginning of 2018 and has been serving the research community ever since. This homogeneous 60,000-core cluster, owned by the University of Toronto and operated by SciNet, was intended to enable large parallel jobs and has a measured performance of 3.02 petaflops, debuting at #53 in the June 2018 TOP500 list. It was designed to optimize throughput of a range of scientific codes running at scale, energy efficiency, and network and storage performance and capacity. It replaced two systems that SciNet operated for over 8 years, the Tightly Coupled System (TCS) and the General Purpose Cluster (GPC). In this paper we describe the transition process from these two systems, the procurement and deployment processes, as well as the unique features that make Niagara a one-of-a-kind machine in

29.07.2019
05:20 Arxiv.org CS: Massively Scaling Seismic Processing on Sunway TaihuLight Supercomputer. (arXiv:1907.11678v1 [cs.DC])

Common Midpoint (CMP) and Common Reflection Surface (CRS) are widely used methods for improving the signal-to-noise ratio in the field of seismic processing. These methods are computationally intensive and require high performance computing. This paper optimizes these methods on the Sunway many-core architecture and implements large-scale seismic processing on the Sunway TaihuLight supercomputer. We propose three optimization techniques: 1) a software cache method that reduces the overhead of memory accesses and shares data among CPEs via register communication; 2) a re-designed semblance calculation procedure that further reduces the overhead of memory accesses; 3) a vectorization method that improves performance when processing small volumes of data within short loops. The experimental results show that our implementations of CMP and CRS methods on
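
For context, the semblance mentioned above is the standard coherence measure used in CMP/CRS stacking (the Neidell-Taner form). The sketch below computes it for a toy gather of traces; it is a plain reference implementation, not the paper's optimized Sunway code.

```python
# Semblance of a moveout-corrected gather: ratio of the stacked-trace energy to
# the total energy of all traces; values near 1 indicate coherent reflections.
import numpy as np

def semblance(traces):
    # traces: array of shape (num_traces, num_samples)
    numerator = np.sum(np.sum(traces, axis=0) ** 2)
    denominator = traces.shape[0] * np.sum(traces ** 2)
    return numerator / denominator if denominator > 0 else 0.0

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0.0, 2.0 * np.pi, 200))
gather = np.tile(signal, (8, 1)) + 0.1 * rng.standard_normal((8, 200))
print(f"semblance of a coherent gather: {semblance(gather):.3f}")
```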

02:47 Zdnet.com: NCI boasts Australia's fastest supercomputer with AU$70m Gadi system

Touted as Australia's most powerful supercomputer.

25.07.2019
19:22 ScienceDaily.com: Supercomputers use graphics processors to solve longstanding turbulence question

Advanced simulations have solved a problem in turbulent fluid flow that could lead to more efficient turbines and engines.

17:30 Phys.org: Supercomputers use graphics processors to solve longstanding turbulence question

Advanced simulations have solved a problem in turbulent fluid flow that could lead to more efficient turbines and engines.

16:07 Phys.org: Engineers discover lead-free perovskite semiconductor for solar cells using data analytics, supercomputers

Solar panel installations are on the rise in the U.S., with more than 2 million new installations in early 2019, the most ever recorded in a first quarter, according to a recent report by Solar Energy Industries Association and Wood Mackenzie Power & Renewables.

17.07.2019
08:58 Arxiv.org Quantitative Biology: Extensible and Scalable Adaptive Sampling on Supercomputers. (arXiv:1907.06954v1 [q-bio.QM])

The accurate sampling of protein dynamics is an ongoing challenge despite the utilization of High-Performance Computing (HPC) systems. Utilizing only "brute force" MD simulations requires an unacceptably long time to solution. Adaptive sampling methods allow a more effective sampling of protein dynamics than standard MD simulations. Depending on the restarting strategy, the speed-up can be more than one order of magnitude. One challenge limiting the adoption of adaptive sampling by domain experts is the relatively high complexity of running it efficiently on HPC systems. We discuss how the ExTASY framework can set up new adaptive sampling strategies and reliably execute the resulting workflows at scale on HPC platforms. Here the folding dynamics of three small proteins are predicted with no a priori information.
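
To make the idea concrete, the toy sketch below shows the shape of an adaptive sampling loop: many short simulations, followed by restarts seeded from under-sampled states. The "simulation" is a 1-D random walk and the restart rule is invented; this is not the ExTASY framework itself.

```python
# Toy adaptive sampling loop: run short "trajectories", then restart the next
# round from the least-visited end states instead of always starting over.
import random
from collections import Counter

def short_md(start, steps=50):
    # Stand-in for a short molecular dynamics trajectory (1-D random walk).
    x = start
    for _ in range(steps):
        x += random.choice([-1, 1])
    return x

def adaptive_sampling(rounds=5, walkers=8):
    counts, starts = Counter(), [0] * walkers
    for _ in range(rounds):
        ends = [short_md(s) for s in starts]
        counts.update(ends)
        # Restarting strategy: seed the next round from the rarest states seen,
        # which is what makes the sampling "adaptive".
        rare = sorted(counts, key=counts.get)[:walkers]
        starts = rare + [0] * (walkers - len(rare))
    return counts

print(adaptive_sampling().most_common(5))
```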

09.07.2019
01:53 ScienceDaily.com: Supercomputer shows 'Chameleon Theory' could change how we think about gravity

Supercomputer simulations of galaxies have shown that Einstein's theory of General Relativity might not be the only way to explain how gravity works or how galaxies form.

08.07.2019
21:12 Nanowerk.com: Supercomputer shows 'Chameleon Theory' could change how we think about gravity

Supercomputer simulations of galaxies have shown that Einstein's theory of General Relativity might not be the only way to explain how gravity works or how galaxies form.

19:51 Phys.org: Supercomputer shows 'Chameleon Theory' could change how we think about gravity

Supercomputer simulations of galaxies have shown that Einstein's theory of General Relativity might not be the only way to explain how gravity works or how galaxies form.

04.07.2019
11:39 Technology.org: A new, more user friendly language for programming supercomputers

The personal computer revolution changed all that, providing most of us with readily accessible and cheaper gadgets that

27.06.2019
07:04 Arxiv.org CS: FatPaths: Routing in Supercomputers, Data Centers, and Clouds with Low-Diameter Networks when Shortest Paths Fall Short. (arXiv:1906.10885v1 [cs.NI])

We introduce FatPaths: a simple, generic, and robust routing architecture for Ethernet stacks. FatPaths enables state-of-the-art low-diameter topologies such as Slim Fly to achieve unprecedented performance, targeting both HPC supercomputers as well as data centers and clusters used by cloud computing. FatPaths exposes and exploits the rich ("fat") diversity of both minimal and non-minimal paths for high-performance multi-pathing. Moreover, FatPaths features a redesigned "purified" transport layer, based on recent advances in data center networking, that removes virtually all TCP performance issues (e.g., the slow start). FatPaths also uses flowlet switching, a technique used to prevent packet reordering in TCP networks, to enable very simple and effective load balancing. Our design enables recent low-diameter topologies to outperform powerful Clos designs, achieving 15% higher net throughput
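
As an aside, flowlet switching (mentioned above) can be illustrated in a few lines: packets of a flow stay on one path, but after an idle gap longer than a threshold a new flowlet begins and may be rerouted, balancing load without reordering packets within a flowlet. The sketch below uses invented path names and thresholds and is not the FatPaths implementation.

```python
# Hypothetical flowlet-switching sketch: a flow keeps its path while packets
# arrive close together; an idle gap longer than FLOWLET_GAP starts a new
# flowlet, which may be sent down a different (possibly non-minimal) path.
import random

FLOWLET_GAP = 500e-6                      # 500 microseconds, illustrative
PATHS = ["path-0", "path-1", "path-2", "path-3"]

class FlowletRouter:
    def __init__(self):
        self.state = {}                   # flow id -> (last packet time, path)

    def route(self, flow_id, now):
        last, path = self.state.get(flow_id, (None, None))
        if last is None or now - last > FLOWLET_GAP:
            path = random.choice(PATHS)   # new flowlet: pick a path afresh
        self.state[flow_id] = (now, path)
        return path

router = FlowletRouter()
t = 0.0
for i in range(10):
    t += random.choice([100e-6, 2e-3])    # short gaps continue a flowlet
    print(i, router.route("tcp-42", t))
```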

21.06.2019
16:23 Gizmag: USA gains ground in supercomputer world rankings


The USA has gained ground in the world supercomputer rankings, with 116 supercomputers listed among the top 500 most powerful in the world. This is up from 109 in November 2018. China continues to dominate the list in terms of the number of installed supercomputers, with 219 top-500 supercomputers, though this is down from 227.

20.06.2019
17:56 ExtremeTech.com: Nvidia Built One of the Most Powerful AI Supercomputers in 3 Weeks

Autonomous vehicles aren't perfect, so to help upgrade their intelligence and prevent fatal accidents, Nvidia created the DGX SuperPod, an AI-optimized supercomputer that will help design a better self-driving car.

19.06.2019
07:53 Technology.org: Researchers to Take Advantage of DOE’s Advanced Supercomputers

The U.S. Department of Energy announced today that it will invest $32 million over the next four years

18.06.2019
22:33 Zdnet.com: All Linux, all the time: Supercomputers Top 500

The new list of the world's fastest computers is out and, once more, every last one runs Linux.

19:14 TechnologyReview.com: The world’s best supercomputers are being updated to run AI software faster

14:23 Phys.org: Supercomputers aid in novel simulations of gamma ray generation research

While intense magnetic fields are naturally generated by neutron stars, researchers have been striving to achieve similar results for many years. UC San Diego mechanical and aerospace engineering graduate student Tao Wang recently demonstrated how an extremely strong magnetic field, similar to that on the surface of a neutron star, can be not only generated but also detected using an X-ray laser inside a solid material.

13:35 Zdnet.com: IBM: We've made world's most powerful commercial supercomputer

French energy giant Total now has the world's 11th most powerful supercomputer in the Pangea III HPC from IBM.

17.06.2019
19:33 Phys.org: Frontera named 5th fastest supercomputer in the world

The Frontera supercomputer at the Texas Advanced Computing Center (TACC) earned the #5 spot on the twice-annual Top 500 list, which ranks the world's most powerful non-distributed computer systems. Located at The University of Texas at Austin, the National Science Foundation (NSF)-supported Frontera is the fastest university supercomputer in the world.

14:59 Zdnet.com: Petaflop systems now dominate the supercomputer landscape

For the first time, it is only petaflop systems that have made the TOP500 list.

12.06.2019
13:26 International Herald Tribune: To Fight Climate Change, We Need More Powerful Supercomputers

Accurate predictions of Earth’s warming require computers that are too expensive for one country or institution.

13:15 New York Times: To Fight Climate Change, We Need More Powerful Supercomputers

Accurate predictions of Earth’s warming require computers that are too expensive for one country or institution.

11.06.2019
17:29 Phys.org: Preparing scientific applications for exascale computing

Exascale computers are soon expected to debut, including Frontier at the U.S. Department of Energy's (DOE) Oak Ridge Leadership Computing Facility (OLCF) and Aurora at the Argonne Leadership Computing Facility (ALCF), both DOE Office of Science User Facilities, in 2021. These next-generation computing systems are projected to surpass the speed of today's most powerful supercomputers by five to 10 times. This performance boost will enable scientists to tackle problems that are otherwise unsolvable in terms of their complexity and computation time.

14:14 Zdnet.com: Meet Europe's new supercomputer: MareNostrum 5 takes on global rivals for power

Barcelona will be home to the new supercomputer with 200 petaflops peak performance.

18.05.2019
00:21 ScienceNewsDaily.org: HP Enterprise buying supercomputer star Cray

Hewlett Packard Enterprise (HPE) on Friday announced a $1.3 billion deal to buy supercomputer maker Cray, part of a move to expand into data analysis from connected devices.

17.05.2019
23:31 NYT Technology: HP Enterprise to Acquire Supercomputer Pioneer Cray

The price was relatively small, but the deal may have a big impact on the race between the United States and China to build more powerful computers.

22:44 ExtremeTech.com: HP Enterprise Buys Supercomputer Pioneer Cray for $1.3B

HPE has bought Cray, the OG supercomputer manufacturer, for $1.3B. The deal is expected to boost HPE's own HPC business segment.

20:43 Phys.org: HP Enterprise buying supercomputer star Cray

Hewlett Packard Enterprise (HPE) on Friday announced a $1.3 billion deal to buy supercomputer maker Cray, part of a move to expand into data analysis from connected devices.

17:39 Zdnet.com: HPE buys supercomputer company Cray for $1.3 billion

At the core of the deal is Cray's high-performance computing (HPC) technology, which HPE wants to offer as a future HPC-as-a-Service platform.

16:49 Zdnet.com: Is HPE about to buy supercomputer maker Cray?

HPE could soon acquire its way to the world of exascale supercomputing.

15:20 Reuters: Hewlett Packard Enterprise to buy supercomputer maker Cray in $1.30 billion deal

Supercomputer manufacturer Cray Inc said on Friday it would be bought by Hewlett Packard Enterprise Co in a deal valued at about $1.30 billion.

13.05.2019
20:56 Technology.org: Supercomputer Simulations Show Black Holes and Their Magnetic Bubbles

When the Event Horizon Telescope team released the first picture ever taken of a black hole in mid-April, the general

19:18 LiveScience.com: New Supercomputer Will Span Continents, Outrace World's Fastest

Paired processors on two continents will power a new computer "brain."

08:47 Zdnet.com: Square Kilometre Array supercomputer design completed

Design work on the 'brain of the SKA', one of two supercomputers, has been completed.

10.05.2019
06:56 Arxiv.org Physics: SPH-EXA: Enhancing the Scalability of SPH codes Via an Exascale-Ready SPH Mini-App. (arXiv:1905.03344v1 [physics.comp-ph])

Numerical simulations of fluids in astrophysics and computational fluid dynamics (CFD) are among the most computationally-demanding calculations, in terms of sustained floating-point operations per second, or FLOP/s. It is expected that these numerical simulations will significantly benefit from the future Exascale computing infrastructures, that will perform 10^18 FLOP/s. The performance of the SPH codes is, in general, adversely impacted by several factors, such as multiple time-stepping, long-range interactions, and/or boundary conditions. In this work an extensive study of three SPH implementations SPHYNX, ChaNGa, and XXX is performed, to gain insights and to expose any limitations and characteristics of the codes. These codes are the starting point of an interdisciplinary co-design project, SPH-EXA, for the development of an Exascale-ready SPH mini-app. We implemented a rotating square

09.05.2019
09:21 Gizmag: World's fastest supercomputer will heat up the race to the exascale era


Supercomputers are due to take a huge leap forward when the "exascale" era kicks off in 2021 with the launch of Aurora. But now it looks like that world-leading machine will be usurped before it's even set up. The Frontier system has just been announced, which will boast the power of over 1.5 exaflops.

08.05.2019
18:14 Phys.org: AMD's tech to power new supercomputer for Department of Energy

Advanced Micro Devices announced Tuesday that its technology will help power a new supercomputer at Tennessee-based Oak Ridge National Laboratory in 2021.

07.05.2019
17:25 Zdnet.com: AMD, Cray to build 1.5 exaFlop Frontier supercomputer for Oak Ridge National Lab

Frontier is set to become the world's fastest supercomputer when it arrives at the lab in 2021.

14:27 ScienceNewsDaily.org: AMD and Cray are building the 'world's most powerful supercomputer'

The US may be set to hang onto the crown of having the world's most powerful supercomputer for some time. Cray Computing and AMD are building an exascale machine with the Oak Ridge ...

03.05.2019
20:39 WhatReallyHappened.com: “THE HAMMER” — Ultra-secret Supercomputer System Used by CIA/NSA to ‘Wiretap’ Trump

President Obama’s Director of National Intelligence (DNI) James Clapper and his Central Intelligence Agency (CIA) director John Brennan oversaw a secret supercomputer system known as “THE HAMMER,” according to former NSA/CIA contractor-turned whistleblower Dennis Montgomery.
Clapper and Brennan were using the supercomputer system to conduct illegal and unconstitutional government data harvesting and wiretapping. THE HAMMER was installed on federal property in Fort Washington, Maryland at a complex which some speculate is a secret CIA and NSA operation operating at a US Naval facility.
President Trump’s allegation that the Obama Administration was wiretapping him is not only supported by Montgomery’s whistleblower revelations about Brennan’s and Clapper’s computer system THE HAMMER, but also by statements made this week by William Binney, a former NSA Technical Director of the World

01.05.2019
00:23 ScienceDaily.com: Novel software to balance data processing load in supercomputers to be presented

The modern-age adage "work smarter, not harder" stresses the importance of not only working to produce, but also making efficient use of resources.

29.04.2019
07:13 Arxiv.org CS: A Benchmarking Study to Evaluate Apache Spark on Large-Scale Supercomputers. (arXiv:1904.11812v1 [cs.DC])

As dataset sizes increase, data analysis tasks in high performance computing (HPC) are increasingly dependent on sophisticated dataflows and out-of-core methods for efficient system utilization. In addition, as HPC systems grow, memory access and data sharing are becoming performance bottlenecks. Cloud computing employs a data processing paradigm typically built on a loosely connected group of low-cost computing nodes without relying upon shared storage and/or memory. Apache Spark is a popular engine for large-scale data analysis in the cloud, which we have successfully deployed via job submission scripts on production clusters.
In this paper, we describe common parallel analysis dataflows for both Message Passing Interface (MPI) and cloud based applications. We developed an effective benchmark to measure the performance characteristics of these tasks using both types of systems,
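
For orientation, the snippet below shows the kind of simple Spark dataflow such a benchmark times (a map/reduce aggregation over a synthetic dataset). It assumes a local PySpark installation and is not the paper's benchmark code.

```python
# Minimal PySpark dataflow: distribute a synthetic range, square each element,
# and reduce to a single sum, timing the whole operation.
import time
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark-hpc-benchmark-sketch").getOrCreate()
sc = spark.sparkContext

n = 10_000_000
start = time.time()
total = sc.range(n).map(lambda x: x * x).reduce(lambda a, b: a + b)
elapsed = time.time() - start

print(f"sum of squares of 0..{n - 1} = {total} in {elapsed:.2f} s")
spark.stop()
```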

07:13 Arxiv.org CS: Shall numerical astrophysics step into the era of Exascale computing? (arXiv:1904.11720v1 [astro-ph.IM])

High performance computing numerical simulations are today one of the most effective instruments to implement and study new theoretical models, and they are mandatory during the preparatory and operational phases of any scientific experiment. New challenges in Cosmology and Astrophysics will require a large number of new, extremely computationally intensive simulations to investigate physical processes at different scales. Moreover, the size and complexity of the new generation of observational facilities also imply a new generation of high performance data reduction and analysis tools, pushing toward the use of Exascale computing capabilities. Exascale supercomputers cannot be produced today. We discuss the major technological challenges in the design, development and use of such computing capabilities, and we report on the progress that has been made in recent years in Europe,

23.04.2019
10:22 Arxiv.org Physics: Status and Future Perspectives for Lattice Gauge Theory Calculations to the Exascale and Beyond. (arXiv:1904.09725v1 [hep-lat])

In this and a set of companion whitepapers, the USQCD Collaboration lays out a program of science and computing for lattice gauge theory. These whitepapers describe how calculation using lattice QCD (and other gauge theories) can aid the interpretation of ongoing and upcoming experiments in particle and nuclear physics, as well as inspire new ones.

10.04.2019
03:16 Zdnet.com: South Australia to house Defence's new AU$68m supercomputer centre

Construction of the Defence High Performance Computing Centre will begin later this year.

04.04.2019
10:45 Arxiv.org Statistics: Deep Reinforcement Learning on a Budget: 3D Control and Reasoning Without a Supercomputer. (arXiv:1904.01806v1 [cs.LG])

An important goal of research in Deep Reinforcement Learning in mobile robotics is to train agents capable of solving complex tasks, which require a high level of scene understanding and reasoning from an egocentric perspective. When trained from simulations, optimal environments should satisfy a currently unobtainable combination of high-fidelity photographic observations, massive amounts of different environment configurations and fast simulation speeds. In this paper we argue that research on training agents capable of complex reasoning can be simplified by decoupling from the requirement of high fidelity photographic observations. We present a suite of tasks requiring complex reasoning and exploration in continuous, partially observable 3D environments. The objective is to provide challenging scenarios and a robust baseline agent architecture that can be trained on mid-range consumer

01.04.2019
21:34 Technology.org: Supercomputers Help Supercharge Protein Assembly

Red blood cells are amazing. They pick up oxygen from our lungs and carry it all over our

06:54 ScienceDaily.com: Scientists develop way to perform supercomputer simulations of the heart on cellphones

You can now perform supercomputer simulations of the heart's electrophysiology in real time on desktop computers and even cellphones. A team of scientists developed a new approach that can not only help diagnose heart conditions and test new treatments, but pushes the boundaries of cardiac science by opening up a floodgate of new cardiac research and education.

30.03.2019
02:03 ScienceDaily.com: Supercomputers help supercharge protein assembly

Using proteins derived from jellyfish, scientists assembled a complex sixteen protein structure composed of two stacked octamers by supercharging alone. This research could be applied to useful technologies such as pharmaceutical targeting, artificial energy harvesting, 'smart' sensing and building materials, and more. Computational modeling through XSEDE allocations on Stampede2 (TACC) and Comet (SDSC) refined measurements of structure.

29.03.2019
23:52 Phys.org: Supercomputers help supercharge protein assembly

Red blood cells are amazing. They pick up oxygen from our lungs and carry it all over our body to keep us alive. The hemoglobin molecule in red blood cells transports oxygen by changing its shape in an all-or-nothing fashion. Four copies of the same protein in hemoglobin open and close like flower petals, structurally coupled to respond to each other. Using supercomputers, scientists are just starting to design proteins that self-assemble to combine and resemble life-giving molecules like hemoglobin. The scientists say their methods could be applied to useful technologies such as pharmaceutical targeting, artificial energy harvesting, 'smart' sensing and building materials, and more.

26.03.2019
20:01 LiveScience.com: Supercomputers Solve a Mystery Hidden Inside Merging Water Droplets

Weird things happen when water droplets smash into each other.

19:02 Technology.org: Supercomputer Simulations Help Combat Tuberculosis (TB) Granulomas

The greatest cause of death due to infection globally is tuberculosis (TB). Two supercomputers – Comet at the San Diego

15:05 Zdnet.com: Europe's big weather supercomputer data center is about to leave UK

The European Centre for Medium-Range Weather Forecasts is setting up its HPC data center in Bologna, Italy.

25.03.2019
17:06 SingularityHub.Com: Intel Is Building the World’s Most Powerful Supercomputer

A supercomputer capable of a quintillion operations a second will go online in 2021 after the US government handed Intel and supercomputer manufacturer Cray a contract to build an exascale computer called Aurora. This machine is being built from the bottom up to run AI at unprecedented scales. Today’s most powerful supercomputers measure their performance […]

20.03.2019
18:27 ScienceDaily.com: Supercomputer simulations shed light on how liquid drops combine

High performance computing has revealed in detail how liquid droplets combine, in a development with applications such as improving 3D printing technologies or the forecasting of thunderstorms.

16:44 Phys.org: Supercomputer sheds light on how droplets merge

Scientists have revealed the precise molecular mechanisms that cause drops of liquid to combine, in a discovery that could have a range of applications.

15:08 Phys.org: Supercomputers to help supercharge ceramic matrix composite manufacturing

New software capabilities developed by computational scientists at the US Department of Energy's (DOE's) Oak Ridge National Laboratory (ORNL) and the Rolls Royce Corporation could soon help engineers lift the gas turbine engines of aircraft and power plants to higher efficiencies.

19.03.2019
22:41 WhatReallyHappened.com: US government teams up with Intel and Cray on $500 million plan to build Project Aurora supercomputer capable of completing 1 quadrillion calculations PER SECOND

A U.S. government-led group is working with chipmaker Intel Corp and Cray Inc to develop and build the nation's fastest computer by 2021 for conducting nuclear weapons and other research, officials said on Monday.
The Department of Energy and the Argonne National Laboratory near Chicago said they are working on a supercomputer dubbed Aurora with Intel, the world's biggest supplier of data center chips, and Cray, which specializes in the ultra-fast machines.

16:57 Telegraph.co.uk: US to create world's most powerful supercomputer capable of 1 quintillion calculations per second


16:02 TechnologyReview.com: The US is building a $500m ‘exascale’ computer that will be the world’s most powerful

15:14 Phys.org: New Argonne supercomputer, built for next-gen AI, will be most powerful in U.S.

The most powerful computer ever built in the United States will make its home at Argonne National Laboratory in 2021, the U.S. Department of Energy and Intel announced today. Aurora, the United States' first exascale computer, will combine unprecedented processing power with the growing potential of artificial intelligence to help solve the world's most important and complex scientific challenges.

14:52 ExtremeTech.com: Intel, DOE Announce First-Ever Exascale Supercomputer ‘Aurora’

Intel and the DOE have announced the first exascale computer expected to be deployed. Codenamed Aurora, the system should be ready by 2021.

12:04 New York Times: Racing Against China, U.S. Reveals Details of $500 Million Supercomputer

Lab officials predict it will be the first American machine to reach a milestone called “exascale” performance, surpassing a quintillion calculations per second.

12:03 International Herald Tribune: Racing Against China, U.S. Reveals Details of $500 Million Supercomputer

Lab officials predict it will be the first American machine to reach a milestone called “exascale” performance, surpassing a quintillion calculations per second.

10:34 Technology.org: U.S. Department of Energy and Intel to deliver first exascale supercomputer

Targeted for 2021 delivery, the Argonne National Laboratory supercomputer will enable high-performance computing and artificial intelligence at exascale.

10:16 Ixbt.com: Unannounced Intel Xe accelerators to form the basis of Aurora, the first exascale-class supercomputer

Intel has published a press release on its website stating that, together with the U.S. Department of Energy, it is preparing to unveil the first exascale-class supercomputer in 2021, that is, a machine with performance above 1 exaFLOPS.
The supercomputer has been named Aurora and will be installed at Argonne National Laboratory. The contract as a whole is valued at $500 million.
The most interesting part, though, is what the supercomputer is built on. Aurora will include "new Intel technologies designed specifically for the convergence of artificial intelligence and high-performance computing at extreme scale." These include, among other things, solutions based on the Intel Xe compute architecture. Even though we were ultimately told that Intel Xe is not a brand but rather the name of the company's transition from an energy-efficient GPU architecture to a scalable one, in this case the processor

05:22 Gizmag: Intel's next-gen supercomputer to usher in exascale era in 2021


The next generation of supercomputers has an official start date. Intel and the US Department of Energy (DOE) are teaming up to deliver the world's first exascale supercomputer in 2021, giving a huge boost to many different fields of research. Named Aurora, the new system will be a thousand times more powerful than the petascale generation that began in 2008 and is still in wide use today.

18.03.2019
23:37 Zdnet.com: U.S. Department of Energy plans exaFlop supercomputer in 2021

The effort will leverage Cray's Shasta supercomputing platform as well as Intel technology.

22:51 ScienceNewsDaily.org: America’s first exascale supercomputer to be built by 2021

Details of America’s next-generation supercomputer were revealed at a ceremony attended by Secretary of Energy Rick Perry and Senator Dick Durbin at Argonne National Laboratory ...

22:16 NYT Technology: Racing Against China, U.S. Reveals Details of $500 Million Supercomputer

Lab officials predict it will be the first American machine to reach a milestone called “exascale” performance, surpassing a quintillion calculations per second.

15.03.2019
15:45 Phys.org: Handling trillions of supercomputer files just got simpler

A new distributed file system for high-performance computing available today via the software collaboration site GitHub provides unprecedented performance for creating, updating and managing extreme numbers of files.

05.03.2019
15:03 LiveScience.com: Physicists Used Supercomputers to Map the Bone-Crushing Pressures Hiding Inside Protons

If you shrank yourself down and entered a proton, you'd experience among the most intense pressures found anywhere in the universe.

21.02.2019
15:02 Technology.org: DTU boasts top-performing supercomputers

Over a five-year period, DTU will invest close to EUR 9.4 million (DKK 70 million) in upgrading and

11:41 Arxiv.org CS: 'Zhores' -- Petaflops supercomputer for data-driven modeling, machine learning and artificial intelligence installed in Skolkovo Institute of Science and Technology. (arXiv:1902.07490v1 [cs.DC])

The Petaflops supercomputer "Zhores" recently launched in the "Center for Computational and Data-Intensive Science and Engineering" (CDISE) of Skolkovo Institute of Science and Technology (Skoltech) opens up new exciting opportunities for scientific discoveries in the institute especially in the areas of data-driven modeling, machine learning and artificial intelligence. This supercomputer utilizes the latest generation of Intel and NVidia processors to provide resources for the most compute intensive tasks of the Skoltech scientists working in digital pharma, predictive analytics, photonics, material science, image processing, plasma physics and many more. Currently it places 6th in the Russian and CIS TOP-50 (2018) supercomputer list. In this article we summarize the cluster properties and discuss the measured performance and usage modes of this scientific instrument in

19.02.2019
09:35 Arxiv.org CS: ENBB Processor: Towards the ExaScale Numerical Brain Box [Position Paper]. (arXiv:1902.06655v1 [cs.AR])

ExaScale systems will be a key driver for simulations that are essential for the advancement of science and economic growth. We present a new concept of microprocessor for floating-point computations, intended as a basic building block of ExaScale systems and beyond. The proposed microprocessor architecture has a frontend for the programming interface based on the concept of event-driven simulation. The user program is executed as an event-driven simulation using a hardware/software co-designed simulator. This is the flexible part of the system. The back-end exploits the concept of uniform topology as in a brain: a massive packet-switched interconnection network with flit credit-based flow control and virtual channels that seamlessly incorporates communication, arithmetic and storage. Floating-point computations are incorporated as on-line arithmetic operators in the output ports of the

24.01.2019
22:52 Phys.org: Physicists use supercomputers and AI to create the most accurate model yet of black hole mergers

One of the most cataclysmic events to occur in the cosmos involves the collision of two black holes. Formed from the deathly collapse of massive stars, black holes are incredibly compact—a person standing near a stellar-mass black hole would feel gravity about a trillion times more strongly than they would on Earth. When two objects of this extreme density spiral together and merge, a fairly common occurrence in space, they radiate more power than all the stars in the universe.

02:49 WhatReallyHappened.com: IRS Becoming Big Brother With $99-Million Supercomputer – will give the agency the “unprecedented ability to track the lives and transactions of tens of millions of American citizens”

07.01.2019
15:09 AzoRobotics.com: Maximum Computing Power and Flexibility with AI-Capable Supercomputer ZF ProAI

ZF launched the newest model of its automotive supercomputer ZF ProAI right before the start of the 2019 Consumer Electronics Show (CES). The ZF ProAI RoboThink central control unit offers the maximum...

03.01.2019
18:36 WhatReallyHappened.com: This million-core supercomputer inspired by the human brain breaks all the rules

For all their fleshly failings, human brains are the model that computer engineers have always sought to emulate: huge processing power that's both surprisingly energy efficient, and available in a tiny form factor. But late last year, in an unprepossessing former metal works in Manchester, one machine became the closest thing to an artificial human brain there is.
The one-million core SpiNNaker -- short for Spiking Neural Network Architecture -- is the culmination of decades of work and millions of pounds of investment. The result: a massively parallel supercomputer designed to mimic the workings of the human brain, which it's hoped will give neuroscientists a new understanding of how the mind works and open up new avenues of medical research.

15:52 Zdnet.com: This million-core supercomputer inspired by the human brain breaks all the rules

SpiNNaker's spiking neural network mimics the human brain, and could fuel breakthroughs in robotics and health.

17.12.2018
19:05 Phys.org: Team wins major supercomputer time to study the edge of fusion plasmas

The U.S. Department of Energy (DOE) has awarded major computer hours on three leading supercomputers, including the world's fastest, to a team led by C.S. Chang of the DOE's Princeton Plasma Physics Laboratory (PPPL). The team is addressing issues that must be resolved for successful operation of ITER, the international experiment under construction in France to demonstrate the feasibility of producing fusion energy—the power that drives the sun and stars—in a magnetically controlled fusion facility called a "tokamak."

12.12.2018
15:11 Zdnet.com: The rise, fall, and rise of the supercomputer in the cloud era

Though the personal computer was born from garage projects, the supercomputer had been declining to the back of the garage. That's until a handful of trends conspired to poke the reset button for the industry. Now the race is back on.

10.12.2018
14:44 Phys.org: Supercomputers without waste heat

Generally speaking, magnetism and the lossless flow of electrical current ("superconductivity") are competing phenomena that cannot coexist in the same sample. However, for building supercomputers, synergetically combining both states comes with major advantages as compared to today's semiconductor technology, characterized by high power consumption and heat production. Researchers from the Department of Physics at the University of Konstanz have now demonstrated that the lossless electrical transfer of magnetically encoded information is possible. This finding enables enhanced storage density on integrated circuit chips and significantly reduces the energy consumption of computing centres. The results of this study have been published in the current issue of the scientific journal Nature Communications.

07.12.2018
22:48 ScienceDaily.com: Supercomputers without waste heat

Physicists explore superconductivity for information processing.

18:18 Nanowerk.com: Supercomputers without waste heat

Physicists explore superconductivity for information processing.

06.12.2018
17:16 Phys.org: LIGO supercomputer upgrade will speed up groundbreaking astrophysics research

In 2016, an international team of scientists found definitive evidence—tiny ripples in space known as gravitational waves—to support one of the last remaining untested predictions of Einstein's theory of general relativity. The team used the Laser Interferometer Gravitational-Wave Observatory (LIGO), which has since made several gravitational wave discoveries. Each discovery was possible in part because of a global network of supercomputer clusters, one of which is housed at Penn State. Researchers use this network, known as the LIGO Data Grid, to analyze the gravitational wave data.

05.12.2018
18:17 Telegraph.co.uk: UK supercomputer gives African farmers early warning of pests and blights

07:56 Arxiv.org Physics: Pushing Back the Limit of Ab-initio Quantum Transport Simulations on Hybrid Supercomputers. (arXiv:1812.01396v1 [physics.comp-ph])

The capabilities of CP2K, a density-functional theory package, and OMEN, a nano-device simulator, are combined to study transport phenomena from first-principles in unprecedentedly large nanostructures. Based on the Hamiltonian and overlap matrices generated by CP2K for a given system, OMEN solves the Schroedinger equation with open boundary conditions (OBCs) for all possible electron momenta and energies. To accelerate this core operation, a robust algorithm called SplitSolve has been developed. It makes it possible to treat the OBCs on CPUs and the Schroedinger equation on GPUs simultaneously, taking advantage of hybrid nodes. Our key achievements on the Cray-XK7 Titan are (i) a reduction in time-to-solution by more than one order of magnitude as compared to standard methods, enabling the simulation of structures with more than 50000 atoms, (ii) a parallel efficiency of 97% when scaling from 756 up to

01.12.2018
00:30 ScienceDaily.com: A new way to see stress -- using supercomputers

Supercomputer simulations show that at the atomic level, material stress doesn't behave symmetrically. Widely used atomic stress formulae significantly underestimate stress near stress concentrators such as a dislocation core, crack tip, or interface in a material under deformation. Supercomputers simulate the force interactions of a Lennard-Jones perfect single crystal of 240,000 atoms. The study's findings could help scientists design new materials such as glass or metal that doesn't ice up.
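
For reference, the pair potential of the simulated crystal and the per-atom virial stress that is the usual "widely used" formula are given below; these are standard textbook expressions, not the corrected formula proposed by the study.

```latex
% Lennard-Jones pair potential and the conventional per-atom virial stress
% (standard background expressions, not the study's corrected formula).
\[
  V_{\mathrm{LJ}}(r) = 4\varepsilon\left[\left(\frac{\sigma}{r}\right)^{12}
                                        - \left(\frac{\sigma}{r}\right)^{6}\right],
  \qquad
  \sigma_{\alpha\beta} = \frac{1}{\Omega}\left(
      -\sum_{a} m_a\, v^{a}_{\alpha} v^{a}_{\beta}
      + \frac{1}{2}\sum_{a \neq b} r^{ab}_{\alpha} f^{ab}_{\beta}
  \right).
\]
```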
