Nanoracks just booked a SpaceX launch to demo tech that turns used spacecraft into orbital habitats

SpaceX will launch a payload for client Nanoracks aboard one of its new rideshare missions, currently targeting late 2020, that will demonstrate a very ambitious piece of tech from the commercial space station company. Nanoracks is sending up a payload platform that will show how it can use a robot to cut material very similar to the upper stages used in orbital spacecraft — something Nanoracks eventually wants to do to help convert these spent and discarded stages (sometimes called "space tugs" because they generally move payloads from one area of orbit to another) into orbital research stations, habitats and more.

The demonstration mission is part of Nanoracks' "Space Outpost Program," which aims to address the future need for commercial in-space orbital platforms by making use of existing vehicles and materials already designed for space. By reusing the upper stages of spacecraft left behind in orbit, the company hopes to show how it might one day greatly reduce the cost of setting up in-space stations and habitats, broadening potential access to these kinds of facilities for commercial space companies.

Provided the demo goes as planned, this will be the first-ever demonstration of structural metal cutting in space, and it could be a key technology not just for establishing more permanent research facilities in Earth orbit, but also for setting up infrastructure to help us get to, and stay at, destinations like the Moon and Mars.

Nanoracks has a track record of delivering when it comes to space station technology: It was the first company to own and operate its own hardware on the International Space Station, and it has accomplished a lot since its founding in 2009. This demo mission is also funded via a contract in place with NASA.

Also going up on the same mission is a payload of eight Spire LEMUR-2 CubeSats, which Nanoracks ordered on behalf of the global satellite operator. That late 2020 date is subject to change, as are most of the long-tail SpaceX missions, but whenever it takes place, it'll be a key moment in commercial space history to watch.


The scale of supercomputing has grown almost too large to comprehend, with millions of compute units performing calculations at rates that, for the first time, require the exa prefix — denoting quintillions of operations per second. How was this accomplished? With careful planning… and a lot of wires, say two people close to the project.

Having noted the news earlier this year that Intel and Argonne National Lab were planning to take the wrapper off a new exascale computer called Aurora (one of several being built in the U.S.), I recently got a chance to talk with Trish Damkroger, head of Intel's Extreme Computing Organization, and Rick Stevens, Argonne's associate lab director for computing, environment and life sciences.

The two discussed the technical details of the system at the Supercomputing conference in Denver, where, probably, most of the people who can truly say they understand this type of work already were. So while you can read in industry journals and the press release about the nuts and bolts of the system, including Intel's new Xe architecture and Ponte Vecchio general-purpose compute chip, I tried to get a little more of the big picture from the two.

Intel and Cray are building a $500 million ‘exascale' supercomputer for Argonne National Lab

It should surprise no one that this is a project long in the making — but you might not guess exactly how long: more than a decade. Part of the challenge, then, was to establish computing hardware that was leagues beyond what was possible at the time.

"Exascale was first being started in 2007. At that time we hadn't even hit the petascale target yet, so we were planning like three to four magnitudes out," said Stevens. "At that time, if we had exascale, it would have required a gigawatt of power, which is obviously not realistic. So a big part of reaching exascale has been reducing power draw."

Intel's supercomputing-focused Xe architecture is based on a 7-nanometer process, pushing the very edge of Newtonian physics — much smaller and quantum effects start coming into play. But the smaller the gates, the less power they take, and microscopic savings add up quickly when you're talking billions and trillions of them.

But that merely exposes another problem: If you increase the power of a processor by 1000x, you run into a memory bottleneck. The system may be able to think fast, but if it can't access and store data equally fast, there's no point.

"By having exascale-level computing, but not exabyte-level bandwidth, you end up with a very lopsided system," said Stevens.
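To make that lopsidedness concrete, here is a quick back-of-envelope sketch in Python of the "bytes per FLOP" balance figure HPC designers track. The compute and bandwidth numbers are round placeholders for illustration, not Aurora's published specifications.

```python
# Back-of-envelope "machine balance": bytes of memory bandwidth available
# per floating-point operation. All numbers are illustrative placeholders,
# not Aurora's published specifications.

EXA = 1e18  # the "exa" prefix: 10^18

peak_flops = 1.0 * EXA           # assumed compute: 1 exaFLOP/s
memory_bandwidth = 0.01 * EXA    # assumed aggregate bandwidth: 10 PB/s

balance = memory_bandwidth / peak_flops
print(f"Machine balance: {balance:.2f} bytes per FLOP")

# A simple streaming kernel such as DAXPY (y = a*x + y) moves about 24 bytes
# for every 2 FLOPs, i.e. 12 bytes/FLOP. Against 0.01 bytes/FLOP of hardware
# balance, such code is memory-bound by roughly three orders of magnitude,
# which is the "lopsided system" Stevens warns about.
```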

And once you clear both those obstacles, you run into a third: what's called concurrency. High performance computing is as much about synchronizing a task across huge numbers of computing units as it is about making those units as powerful as possible. The machine operates as a whole, and as such every part must communicate with every other part — which becomes something of a problem as you scale up.

"These systems have many thousands of nodes, and the nodes have hundreds of cores, and the cores have thousands of computation units, so there's, like, billion-way concurrency," Stevens explained. "Dealing with that is the core of the architecture."
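That billion-way figure falls straight out of multiplying the layers Stevens lists. A quick sketch with round placeholder counts (not Aurora's real topology):

```python
# Rough concurrency estimate for an exascale-class machine. The counts are
# round placeholders matching Stevens' description, not Aurora's real topology.

nodes = 10_000            # "many thousands of nodes"
cores_per_node = 100      # "hundreds of cores" per node
units_per_core = 1_000    # "thousands of computation units" per core

concurrent_work_items = nodes * cores_per_node * units_per_core
print(f"{concurrent_work_items:,} concurrent work items")  # 1,000,000,000
```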

How they did it, I, being utterly unfamiliar with the vagaries of high performance computing architecture design, would not even attempt to explain. But they seem to have done it, as these exascale systems are coming online. The solution, I'll only venture to say, is essentially a major advance on the networking side. The level of sustained bandwidth between all these nodes and units is staggering.

Intel and Argonne National Lab on ‘exascale' and their new Aurora supercomputer

Making exascale accessible

While even in 2007 you could predict that we'd eventually reach such low-power processes and improved memory bandwidth, other trends would have been nearly impossible to predict — for example, the exploding demand for AI and machine learning. Back then it wasn't even a consideration, and now it would be folly to create any kind of high performance computing system that wasn't at least partially optimized for machine learning problems.

"By 2023 we expect AI workloads to be a third of the overall HPC server market," said Damkroger. "This AI-HPC convergence is bringing those two workloads together to solve problems faster and provide greater insight."

To that end, the architecture of the Aurora system is built to be flexible while retaining the ability to accelerate certain common operations, for instance the type of matrix calculations that make up a great deal of many machine learning tasks.
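For a sense of why matrix math gets singled out, here is a minimal NumPy sketch of the dense matrix multiply at the heart of most deep-learning layers. The layer sizes are arbitrary illustrative choices; Aurora's real kernels run on Intel's Xe GPUs, not NumPy.

```python
import numpy as np

# One dense neural-network layer is essentially a single matrix multiply:
# activations (batch x in_features) times weights (in_features x out_features).
# Sizes here are arbitrary illustrative choices.
batch, in_features, out_features = 1024, 4096, 4096
activations = np.random.rand(batch, in_features).astype(np.float32)
weights = np.random.rand(in_features, out_features).astype(np.float32)

outputs = activations @ weights  # the operation ML accelerators are built around

# Each output element takes in_features multiply-add pairs, so:
flops = 2 * batch * in_features * out_features
print(f"~{flops / 1e9:.0f} GFLOPs for one forward pass of this layer")
```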

"But it's not just about performance, it has to be about programmability," she continued. "One of the big challenges of an exascale machine is being able to write software to use that machine. oneAPI is going to be a unified programming model — it's based on an open standard of Data Parallel C++, and that's key for promoting use in the community."

Summit, as of this writing the most powerful single computing system in the world, is very dissimilar to many of the systems developers are used to working on. If the creators of a new supercomputer want it to have broad appeal, they need to make operating it as close to operating a "normal" computer as possible.

"It's something of a challenge to bring x86-based packages to Summit," Stevens noted. "The big advantage for us is that, because we have x86 nodes and Intel GPUs, this thing is basically going to run every piece of software that exists. It'll run standard software, Linux software, literally millions of apps."

I asked about the costs involved, since it's something of a mystery how a half-billion-dollar budget for a system like this gets broken down. Really I just thought it would be interesting to know how much of it went to, say, RAM versus processing cores, or how many miles of wire they had to run. Though both Stevens and Damkroger declined to comment, the former did note that "the backlink bandwidth on this machine is many times the total of the entire internet, and that does cost something." Make of that what you will.

Aurora, unlike its cousin El Capitan at Lawrence Livermore National Lab, will not be used for weapons development.

$600M Cray supercomputer will tower above the rest — to build better nukes

"Argonne is a science lab, and it's open, not classified science," said Stevens. "Our machine is a national user resource; we have people using it from all over the country. A large amount of time is allocated via a process that's peer reviewed and priced to accommodate the most interesting projects. About two thirds is that, and the other third is Department of Energy stuff, but still unclassified problems."

Initial work will be in climate science, chemistry, and data science, with 15 teams between them signed up for major projects to be run on Aurora — details to be announced soon.


The practice of Chaos Engineering developed at Amazon and Netflix a decade ago to help those web scale companies test their complex systems for worst-case scenarios before they happened. Gremlin was started by a former employee of both these companies to make it easier to perform this type of testing without a team of Site Reliability Engineers (SREs). Today, the company announced that it now supports Chaos Engineering-style testing on Kubernetes clusters.

The company made the announcement at the beginning of KubeCon, the Kubernetes conference taking place in San Diego this week.

Gremlin co-founder and CEO Kolton Andrus says that the idea is to be able to test and configure Kubernetes clusters so they will not fail, or at least to reduce the likelihood. He says that to do this, it's critical to run chaos testing (tests of mission-critical systems under extreme duress) in live environments, whether you're testing Kubernetes clusters or anything else, but it's also a bit dangerous to be doing this. To mitigate the risk, he says, best practices suggest you limit the experiment to the smallest test possible that gives you the most information.

"We can come in and say, I'm going to deal with just these clusters. I want to cause failure here to understand what happens in Kubernetes when these pieces fail. For instance, being able to see what happens when you pause the scheduler. The goal is being able to help people understand this concept of the blast radius, and safely guide them to running an experiment," Andrus explained.
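To illustrate what a blast-radius-limited experiment looks like in practice, here is a generic sketch using the official Kubernetes Python client. This is not Gremlin's product or API; the namespace and label selector are hypothetical, and the "experiment" is simply deleting one pod from a narrowly scoped workload.

```python
# A generic blast-radius-limited chaos experiment using the official
# Kubernetes Python client. This is NOT Gremlin's API; the namespace and
# label selector below are hypothetical examples.
import random

from kubernetes import client, config

config.load_kube_config()          # use the current kubectl context
core = client.CoreV1Api()

NAMESPACE = "staging"              # confine the experiment to one namespace
LABEL_SELECTOR = "app=checkout"    # ...and to one labeled workload

pods = core.list_namespaced_pod(NAMESPACE, label_selector=LABEL_SELECTOR).items
if pods:
    victim = random.choice(pods)   # smallest useful test: remove a single pod
    print(f"Deleting pod {victim.metadata.name} in namespace {NAMESPACE}")
    core.delete_namespaced_pod(victim.metadata.name, NAMESPACE)
    # Then observe: does the deployment reschedule the pod and recover
    # within its error budget?
```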

In addition, Gremlin is helping customers harden their Kubernetes clusters to help prevent failures with a set of best practices. "We clearly have the tooling that people need [to conduct this type of testing], but we've also learned through many, many customer interactions and experiments to help them really tune and configure their clusters to be fault tolerant and resilient," he said.

The Gremlin interface is designed to facilitate this kind of targeted experimentation. You can check the areas you want to apply a test, and you can see graphically which parts of the system are being tested. If things get out of control, there is a kill switch to stop the tests.

Gremlin brings Chaos Engineering as a Service to Kubernetes

Gremlin Kubernetes testing screen (Screenshot: Gremlin)

Gremlin launched in 2016 and is headquartered in San Jose. It offers both freemium and paid products. The company has raised almost $27 million, according to Crunchbase data.

How you react when your systems fail may define your business

Technics EAH-F70N

Panasonic brought back its classic Technics brand some four years ago now, and a whole new range of audio goodies was announced at the beginning of 2019, but Australia was sorely missing out on the action.

Thankfully, the company has now finally announced that it will be officially selling a selection of those new vinyl turntables, headphones and

A new Half-Life game is in development, Valve confirms

We had heard whispers, and it turns out the rumors were true – game developer Valve is set to release Half-Life: Alyx.

In a tweet from the official Valve Twitter account, the company described the upcoming release as its “flagship VR game”.

So far, information about the new game is scant, but Valve will unveil further details about Half-Life: Alyx

In surprise move, Disney Plus offers heaps more movies in Australia than the US

Australians are used to getting the short end of the stick when it comes to the size of their streaming content libraries, often missing out on movies and TV shows available to US subscribers – yes, we're talking about you, Netflix. 

However, things appear to be quite the opposite when it comes to Disney Plus, which officially launched in Australia
