David Pratten is passionate about leading IT-related change projects for social good.

Review: Manage your project portfolio


Johanna Rothman wrote the book Manage Your Project Portfolio, Second Edition – Increase Your Capacity and Finish More Projects. It is a book for the more professional portfolio manager or, as mentioned on the back of the book: expert skill level.

If you are facing too many projects, and firefighting and multitasking are keeping you from finishing any of them, this book will help you manage your portfolio. It makes use of agile and lean ways of working and brings the biggest benefits when you are running your projects in an agile way too: projects become feature sets to be delivered, and evaluation means prioritizing those feature sets. In the quick reference card, I highlighted the way the author promotes the use of Kanban boards to manage your portfolio and to visualize some of the decisions to be taken.


To download: Manage your portfolio (QRC, 171017) v1.0

The author divided the book into fourteen chapters, which give you a step-by-step approach to building your portfolio. Each chapter ends with several situations and possible responses to try.

  1. Meet your project portfolio: It’s not your customer who cares about your portfolio. But if you are struggling to finish projects, and you want to deliver faster, more often, and with good quality to your customer, you need a portfolio.
  2. See your future: by managing your portfolio you make the organization’s choices transparent. It becomes clear what to work on first, second, and third, which helps to avoid multitasking. What does it mean to apply lean approaches to your project portfolio? You must think in terms of value and let teams work in small chunks that they can handle and complete.
  3. Create the first draft of your portfolio: start collecting all the work before you attempt to evaluate it and determine whether you need to do it now. Where needed, organize sets of projects into programs. Divide your projects into feature sets or Minimum Marketable Features. Not all feature sets are equally important.
  4. Evaluate your projects: the very first decision is whether you want to commit to the project, kill it, or transform it in some way before continuing. If you don’t want to commit but can’t kill it either, put the project on a project parking lot (name, date, value discussion and notes) so you don’t lose track of it.
  5. Rank the portfolio: you can use many methods to rank. The author discusses ranking with Cost of Delay, business value points (divide a total number of points across your projects), by risk, by your organization’s context, by your product’s position in the marketplace, or by using pairwise comparison with single or double elimination (a minimal Cost of Delay ranking sketch follows this list).
  6. Collaborate on the portfolio: Making portfolio decisions is never a single person’s decision. Facilitate portfolio evaluation meetings.
  7. Iterate on the portfolio: Set an iteration length for your review cycles. This cycle length is affected by your project life cycle (agile delivery gives you the opportunity to have shorter review cycles), your product roadmap, and budgeting cycle.
  8. Make portfolio decisions: conduct portfolio evaluation meetings at least quarterly to start with, and decide how often to review the project parking lot. How are you going to handle advanced R&D projects? Build a project portfolio Kanban (create backlog, evaluate, project work, assess/validate, and maintain) to manage your portfolio.
  9. Visualize your project portfolio: Create a calendar view of your projects with predicted dates. Show not only your staffed projects but your unstaffed work too.
  10. Scaling portfolio management to an enterprise: what are the consequences of resource-efficiency thinking (100% resource utilization is 0% flow)? How can you scale, starting bottom-up or top-down? You need both, but scale with care. Do you know your enterprise’s mission or strategy? Without one it will be very difficult, if not impossible, to make large decisions. Set up a corporate project portfolio meeting to answer the questions: which projects help to implement our strategy, and which projects distract us from it?
  11. Evolve your portfolio: using lean can help you to evolve your portfolio approach. What does it mean if you stabilize the time-box or the number of work items in progress? (See the Naked Planning video too.)
  12. Measure the essentials: for a lean or agile approach consider the following measures: the team’s velocity (current and historical), the amount of work in progress (cycle and lead time, cumulative flow), obstacles preventing the team from moving faster (and how long they have been in progress), a product backlog burn-up chart, and run rate. Never measure individual productivity.
  13. Define your mission: Brainstorm the essentials of a mission, refine the mission (specify strong verbs, eliminate adverbs, avoid jargon), iterate until you feel comfortable, test your mission, make the mission real for everyone.
  14. Start somewhere…but start!
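To make the ranking approaches in chapter 5 concrete, here is a minimal Python sketch of ranking a portfolio by CD3 (Cost of Delay divided by duration), one of the Cost of Delay techniques. The project names and numbers are purely illustrative and not taken from the book.

```python
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    cost_of_delay: float    # value lost per week of delay (e.g. in $k/week)
    duration_weeks: float   # estimated duration

    @property
    def cd3(self) -> float:
        # CD3 = Cost of Delay Divided by Duration: favour high-value, short work
        return self.cost_of_delay / self.duration_weeks

# Illustrative portfolio (hypothetical projects and numbers)
portfolio = [
    Project("Payments revamp", cost_of_delay=50, duration_weeks=10),
    Project("Reporting feature set", cost_of_delay=20, duration_weeks=2),
    Project("Platform upgrade", cost_of_delay=30, duration_weeks=6),
]

# Rank the portfolio: highest CD3 first
for p in sorted(portfolio, key=lambda p: p.cd3, reverse=True):
    print(f"{p.name}: CD3 = {p.cd3:.1f}")
```

Ranking by CD3 is only one of the methods listed above; the same structure works for business value points or risk scores by swapping the sort key.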

Conclusion: Johanna Rothman has written a must-read for portfolio managers who are struggling with their role as their organization moves towards more business agility, with more and more permanent agile teams in place. It is just as useful for the traditional portfolio manager facing too many projects and almost no delivery, offering hands-on, practical advice to start organizing their portfolios.

To order: Manage Your Project Portfolio, Second Edition – Increase Your Capacity and Finish More Projects.

Naked Planning Overview by Arlo Belshee

Arlo was one of the first to lay out the inspiration for Kanban systems for software development.








Artificial Intelligence Learns to Learn Entirely on Its Own


A mere 19 months after dethroning the world’s top human Go player, the computer program AlphaGo has smashed an even more momentous barrier: It can now achieve unprecedented levels of mastery purely by teaching itself. Starting with zero knowledge of Go strategy and no training by humans, the new iteration of the program, called AlphaGo Zero, needed just three days to invent advanced strategies undiscovered by human players in the multi-millennia history of the game. By freeing artificial intelligence from a dependence on human knowledge, the breakthrough removes a primary limit on how smart machines can become.

Earlier versions of AlphaGo were taught to play the game using two methods. In the first, called supervised learning, researchers fed the program 100,000 top amateur Go games and taught it to imitate what it saw. In the second, called reinforcement learning, they had the program play itself and learn from the results.

AlphaGo Zero skipped the first step. The program began as a blank slate, knowing only the rules of Go, and played games against itself. At first, it placed stones randomly on the board. Over time it got better at evaluating board positions and identifying advantageous moves. It also learned many of the canonical elements of Go strategy and discovered new strategies all its own. “When you learn to imitate humans the best you can do is learn to imitate humans,” said Satinder Singh, a computer scientist at the University of Michigan who was not involved with the research. “In many complex situations there are new insights you’ll never discover.”

After three days of training and 4.9 million training games, the researchers matched AlphaGo Zero against the earlier champion-beating version of the program. AlphaGo Zero won 100 games to zero.

To expert observers, the rout was stunning. Pure reinforcement learning would seem to be no match for the overwhelming number of possibilities in Go, which is vastly more complex than chess: You’d have expected AlphaGo Zero to spend forever searching blindly for a decent strategy. Instead, it rapidly found its way to superhuman abilities.

The efficiency of the learning process owes to a feedback loop. Like its predecessor, AlphaGo Zero determines what move to play through a process called a “tree search.” The program starts with the current board and considers the possible moves. It then considers what moves its opponent could play in each of the resulting boards, and then the moves it could play in response and so on, creating a branching tree diagram that simulates different combinations of play resulting in different board setups.

AlphaGo Zero can’t follow every branch of the tree all the way through, since that would require inordinate computing power. Instead, it selectively prunes branches by deciding which paths seem most promising. It makes that calculation — of which paths to prune — based on what it has learned in earlier play about the moves and overall board setups that lead to wins.

Earlier versions of AlphaGo did all this, too. What’s novel about AlphaGo Zero is that instead of just running the tree search and making a move, it remembers the outcome of the tree search — and eventually of the game. It then uses that information to update its estimates of promising moves and the probability of winning from different positions. As a result, the next time it runs the tree search it can use its improved estimates, trained with the results of previous tree searches, to generate even better estimates of the best possible move.
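As a rough illustration of this feedback loop (and only that: the real system uses a deep neural network and a full Monte Carlo tree search, which this toy omits), here is a minimal Python sketch. An agent plays a trivial game against itself, then folds each game's outcome back into a table of position values, so that later games start from better estimates.

```python
import random
from collections import defaultdict

# Toy stand-in for the self-improvement loop described above, on a trivial game
# (Nim: take 1-3 stones, whoever takes the last stone wins). Values learned from
# self-play feed back into move selection in later games.

values = defaultdict(float)   # estimated win probability for the player to move
counts = defaultdict(int)

def candidate_moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def choose_move(stones, exploration=0.2):
    # Prefer moves that leave the opponent a low-value position,
    # but keep exploring so new lines are still discovered.
    if random.random() < exploration:
        return random.choice(candidate_moves(stones))
    return min(candidate_moves(stones), key=lambda m: values[stones - m])

def self_play_game():
    stones, player, history = 11, 0, []
    while stones > 0:
        history.append((player, stones))
        stones -= choose_move(stones)
        player = 1 - player
    winner = 1 - player  # the player who took the last stone
    # Feed the result back into the value estimates (running average).
    for mover, position in history:
        outcome = 1.0 if mover == winner else 0.0
        counts[position] += 1
        values[position] += (outcome - values[position]) / counts[position]

for _ in range(5000):
    self_play_game()

# Positions that are multiples of 4 should end up looking bad for the player to
# move -- the known losing positions in this Nim variant.
print({p: round(values[p], 2) for p in sorted(values)})
```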

The computational strategy that underlies AlphaGo Zero is effective primarily in situations in which you have an extremely large number of possibilities and want to find the optimal one. In the Nature paper describing the research, the authors of AlphaGo Zero suggest that their system could be useful in materials exploration — where you want to identify atomic combinations that yield materials with different properties — and protein folding, where you want to understand how a protein’s precise three-dimensional structure determines its function.

As for Go, the effects of AlphaGo Zero are likely to be seismic. To date, gaming companies have failed in their efforts to develop world-class Go software. AlphaGo Zero is likely to change that. Andrew Jackson, executive vice president of the American Go Association, thinks it won’t be long before Go apps appear on the market. This will change the way human Go players train. It will also make cheating easier.

As for AlphaGo, the future is wide open. Go is sufficiently complex that there’s no telling how good a self-starting computer program can get; and AlphaGo now has a learning method to match the expansiveness of the game it was bred to play.




The new dynamics of strategy: sense-making in a complex and complicated world


The new dynamics of strategy: Sense-making in a complex and complicated world, Kurtz & Snowden, IBM Systems Journal, 2003

Tomorrow we’ll be taking a look at a paper recommended by Linda Rising during her keynote at GOTO Copenhagen earlier this month. Today’s choice provides the necessary background to the Cynefin (Kin-eh-vun) framework on which it is based. If I had to summarise Cynefin in one sentence I think it is this: one size does not fit all. And following that, I value the insight that sometimes we operate in domains where there isn’t a straightforward linear cause-and-effect relationship, although we like to act as though there is!

In addition to the paper, I found watching Dave Snowden’s keynote talk “Embrace Complexity, Scale Agility” (on YouTube) to be helpful.

The development of management science, from stop-watch-carrying Taylorists to business process reengineering, was rooted in the belief that systems were ordered; it was just a matter of time and resources before the relationships between cause and effect could be discovered… All of these approaches and perceptions do not accept that there are situations in which the lack of order is not a matter of poor investigation, inadequate resources, or lack of understanding, but is a priori the case — and not necessarily a bad thing, either.

Things we believe to be true that ain’t necessarily so

  1. There are underlying relationships between cause and effect in human interactions and markets, which are capable of discovery and empirical validation. Under this assumption, an understanding of causal links in past behaviour enables us to define best practice for future behaviour. There is a right or ideal way of doing things.
  2. Faced with a choice between one or more alternatives, human actors will make a rational decision. Under this assumption, individual and collective behaviour can be managed by manipulation of pain or pleasure outcomes, and through education to make those consequences evident.
  3. The acquisition of capability indicates an intention to use that capability. “We accept that we do things by accident, but assume that others do things deliberately.”

…although these assumptions are true within some contexts, they are not universally true … we are increasingly coming to deal with situations where these assumptions are not true, but the tools and techniques which are commonly available assume that they are.

Order and Un-order

There is order which we design and control, and there is also emergent order which can arise through the interaction of many entities. We looked at some emergent-order-based algorithms on The Morning Paper back in September 2015. This emergent order is still order, but of a different kind. In the paper, the authors call it ‘un-order’, in the spirit of the word ‘undead’: neither ordered in the traditional sense, nor disordered, but somewhere in between.

… learning to recognize and appreciate the domain of un-order is liberating, because we can stop applying methods designed for order and instead focus on legitimate methods that work well in un-ordered situations.

In the world of the un-ordered, every intervention is also a diagnostic, and every diagnostic an intervention – any act changes the nature of the system.

The Cynefin framework

The Welsh word Cynefin has no direct translation into English, but seems to convey a deep sense of place. The Cynefin framework is designed to help you make sense of a situation and think about it in new ways. It looks a bit like a quadrant, but the authors are at pains to point out that (a) there are five domains (including the area of disorder in the centre), and (b) no one domain is better than the others – this is not about trying to get to the top-right corner!

On the right-hand side we have the ordered domains, and on the left-hand side are the un-ordered domains. In the middle is disorder. As of 2003, the domains were described as “Known”, “Knowable”, “Complex”, and “Chaotic”.

More recently, Dave Snowden has been describing them as “Obvious”, “Complicated”, “Complex”, and “Chaotic.”

Obvious

In the ‘obvious’ domain we truly do have known causes and effects where relationships are linear and generally not open to dispute. Here we can define standard operating procedures, use process reengineering, and generally define and incrementally improve best practice / ‘the right way.’ The decision model is to sense incoming data, categorize that data, and then respond in accordance with pre-determined practice.

Complicated

When things start to get complicated we have knowable causes and effects, but they may not be fully known to us. One thing that leads to this situation is causes and effects separated over time and space in chains that are difficult to fully understand. In the complicated domain, we often rely on expert opinion.

This is the domain of systems thinking, the learning organization, and the adaptive enterprise, all of which are too often confused with complexity theory… This is the domain of methodology, which seeks to identify cause-effect relationships through the study of properties which appear to be associated with qualities.

The decision model in the complicated domain is to sense incoming data, analyze that data, and then respond in accordance with expert advice or analysis interpretation. Entrained patterns are dangerous here, as a simple error in an assumption can lead to a false conclusion that is hard to isolate and difficult to spot.

Complex

When we move from the merely complicated to the complex, we’re entering the world of un-order. This is the domain of complexity theory. Emergent patterns can be perceived, but not predicted, a phenomenon called retrospective coherence. And this combination of perception without the ability to predict can get us into all sorts of troubles if we confuse the two:

In this space, structured methods that seize upon such retrospectively coherent patterns and codify them into procedures will confront only new and different patterns for which they are ill prepared. Once a pattern has stabilized, its path appears logical, but it is only one of many that could have stabilized, each of which would also have appeared logical in retrospect.

If we rely on expert opinions, case studies, business books claiming to have found ‘the answer’, and so on in this domain, then we will continue to be surprised by new and unexpected patterns. In the complex domain, then, the decision model is to create probes to make the patterns or potential patterns more visible before we take any action. If we sense desirable patterns we can respond by stabilising them. Likewise we can destabilise any patterns we don’t want. To encourage the establishment of healthy patterns, we can seed the space in such a way that the patterns we want are more likely to emerge (in the YouTube talk referenced earlier, Dave refers to these as attractors).

Understanding this space requires us to gain multiple perspectives on the nature of the system. This is the time to “stand still” (but pay attention) and gain new perspective on the situation… The methods, tools, and techniques of the known (obvious) and knowable (complicated) domains do not work here. Narrative techniques are particularly powerful in this space.

Chaotic

In the chaotic domain there are no perceivable cause and effect relations and the system is turbulent. In this domain there is nothing to analyse, and no patterns to emerge. The decision model here is to act, quickly, and decisively, to reduce the turbulence. Then we can sense the reaction and respond accordingly.

The trajectory of our intervention will differ according to the nature of the space. We may use an authoritarian intervention to control the space and make it knowable or known; or we may need to focus on multiple interventions to create new patterns and thereby move the situation into the complex space.

Disorder

The central domain of disorder is where we do not understand which of the other domains we are in.

Cynefin and software development

Drawing some material from Dave Snowden’s Agile India keynote, we can see how the approach to building software changes in each of these domains. (I’m reminded of some of Simon Wardley’s writings on one-size development methodologies not fitting all).

In an obvious domain, we could use strong process and even a waterfall. In a complicated domain though, we want to use a more iterative process giving us opportunities to analyse and respond. Complex domains suit rapid construction of many prototypes (maybe even in parallel) to see what works. When in chaos, spike!
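As a compact restatement of the decision models and development styles sketched above (a summary aid only, not something from the paper), the mapping can be written as a simple lookup table:

```python
from typing import NamedTuple

class DomainGuide(NamedTuple):
    decision_model: str
    software_approach: str

# Decision models and development approaches per Cynefin domain, as described above.
CYNEFIN = {
    "obvious":     DomainGuide("sense -> categorize -> respond",
                               "strong process, even waterfall; standard operating procedures"),
    "complicated": DomainGuide("sense -> analyze -> respond",
                               "iterative delivery with room for expert analysis"),
    "complex":     DomainGuide("probe -> sense -> respond",
                               "many cheap prototypes, possibly in parallel; amplify what works"),
    "chaotic":     DomainGuide("act -> sense -> respond",
                               "spike: act quickly to reduce turbulence, then reassess"),
}

for domain, guide in CYNEFIN.items():
    print(f"{domain:12} {guide.decision_model:30} {guide.software_approach}")
```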

Moving between domains

A lot of the writing on Cynefin focuses on ‘categorizing’ systems into one of the domains just described. But in the paper, the authors place a lot of emphasis on the transitions that occur between domains.

When people use the Cynefin framework, the way they think about moving between domains is as important as the way they think about the domain they are in, because a move across boundaries requires a shift to a different model of understanding and interpretation as well as a different leadership style.

The paper describes a series of cross-boundary movements and flows, for example:

See the full paper for further details.

There are also background movements at work. The forces of the past tend to cause a clockwise drift in the Cynefin space (ossification). The forces of the future push things in a counter-clockwise direction:

…the death of people and obsolescence of roles cause what is known to be forgotten and require seeking; new generations filled with curiosity begin new explorations that question the validity of established patterns; the energy of youth breaks the rules and brings radical shifts in power and perspective; and sometimes imposition of order is the result.

The two forces pull society in both directions at once: “the old guard is forgotten at the same time that its beliefs affect newcomers in ways they cannot see.”












Falling through the KRACKs


The big news in crypto today is the KRACK attack on WPA2 protected WiFi networks. Discovered by Mathy Vanhoef and Frank Piessens at KU Leuven, KRACK (Key Reinstallation Attack) leverages a vulnerability in the 802.11i four-way handshake in order to facilitate decryption and forgery attacks on encrypted WiFi traffic.

The paper is here. It’s pretty easy to read, and you should.

I don’t want to spend much time talking about KRACK itself, because the vulnerability is pretty straightforward. Instead, I want to talk about why this vulnerability continues to exist so many years after WPA was standardized. And separately, to answer a question: how did this attack slip through, despite the fact that the 802.11i handshake was formally proven secure?

A quick TL;DR on KRACK

For a detailed description of the attack, see the KRACK website or the paper itself. Here I’ll just give a brief, high level description.

The 802.11i protocol (also known as WPA2) includes two separate mechanisms to ensure the confidentiality and integrity of your data. The first is a record layer that encrypts WiFi frames, to ensure that they can’t be read or tampered with. This encryption is (generally) implemented using AES in CCM mode, although there are newer implementations that use GCM mode, and older ones that use RC4-TKIP (we’ll skip these for the moment.)

The key thing to know is that AES-CCM (and GCM, and TKIP) is a stream cipher, which means it’s vulnerable to attacks that re-use the same key and “nonce”, also known as an initialization vector. 802.11i deals with this by constructing the initialization vector using a “packet number” counter, which initializes to zero after you start a session, and always increments (up to 2^48, at which point rekeying must occur). This should prevent any nonce re-use, provided that the packet number counter can never be reset.

The second mechanism you should know about is the “four way handshake” between the AP and a client (supplicant) that’s responsible for deriving the key to be used for encryption. The particular message KRACK cares about is message #3, which causes the new key to be “installed” (and used) by the client.

[Figure] I’m a four-way handshake. Client is on the left, AP is on the right. (Courtesy Wikipedia, used under CC.)

The key vulnerability in KRACK (no pun intended) is that message #3 can be blocked by adversarial nasty people. When this happens, the AP re-transmits this message, which causes (the same) key to be reinstalled into the client (note: see update below). This doesn’t seem so bad. But as a side effect of installing the key, the packet number counters all get reset to zero. (And on some implementations like Android 6, the key gets set to zero — but that’s another discussion.)

The implication is that by forcing the AP to replay this message, an adversary can cause a connection to reset nonces and thus cause keystream re-use in the stream cipher. With a little cleverness, this can lead to full decryption of traffic streams. And that can lead to TCP hijacking attacks. (There are also direct traffic forgery attacks on GCM and TKIP, but this as far as we go for now.)
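To see why resetting the nonce is fatal for a stream cipher, here is a small, self-contained Python sketch. It uses a toy keystream derived from SHA-256 rather than the real AES-CCM construction, purely to illustrate the principle: encrypting two frames under the same key and nonce produces identical keystream, so an eavesdropper can XOR the two ciphertexts and obtain the XOR of the plaintexts, from which known or guessable content leaks the rest.

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream generator (NOT AES-CCM): hash(key || nonce || counter) blocks.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    return xor(plaintext, keystream(key, nonce, len(plaintext)))

key = b"session-key-0123"
nonce = b"\x00" * 8  # packet number counter reset to zero, as after key reinstallation

p1 = b"GET /login HTTP/1.1 "
p2 = b"user=alice&pw=hunter"

c1 = encrypt(key, nonce, p1)
c2 = encrypt(key, nonce, p2)  # same key AND same nonce: keystream is identical

# The attacker never sees the key, but c1 XOR c2 == p1 XOR p2,
# and if one plaintext is known or guessable, the other falls out directly.
assert xor(c1, c2) == xor(p1, p2)
print(xor(xor(c1, c2), p1))  # recovers p2: b'user=alice&pw=hunter'
```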

How did this get missed for so long?

If you’re looking for someone to blame, a good place to start is the IEEE. To be clear, I’m not referring to the (talented) engineers who designed 802.11i — they did a pretty good job under the circumstances. Instead, blame IEEE as an institution.

One of the problems with IEEE is that the standards are highly complex and get made via a closed-door process of private meetings. More importantly, even after the fact, they’re hard for ordinary security researchers to access. Go ahead and google for the IETF TLS or IPSec specifications — you’ll find detailed protocol documentation at the top of your Google results. Now go try to Google for the 802.11i standards. I wish you luck.

The IEEE has been making a few small steps to ease this problem, but they’re hyper-timid incrementalist bullshit. There’s an IEEE program called GET that allows researchers to access certain standards (including 802.11) for free, but only after they’ve been public for six months — coincidentally, about the same time it takes for vendors to bake them irrevocably into their hardware and software.

This whole process is dumb and — in this specific case — probably just cost industry tens of millions of dollars. It should stop.

The second problem is that the IEEE standards are poorly specified. As the KRACK paper points out, there is no formal description of the 802.11i handshake state machine. This means that implementers have to implement their code using scraps of pseudocode scattered around the standards document. It happens that this pseudocode leads to the broken implementation that enables KRACK. So that’s bad too.

And of course, the final problem is implementers. One of the truly terrible things about KRACK is that implementers of the WPA supplicant (particularly on Linux) managed to somehow make Lemon Pledge out of lemons. On Android 6 in particular, replaying message #3 actually sets an all-zero key. There’s an internal logic behind why this happens, but Oy Vey. Someone actually needs to look at this stuff.

What about the security proof?

The fascinating thing about the 802.11i handshake is that despite all of the roadblocks IEEE has thrown in people’s way, it (the handshake, at least) has been formally analyzed. At least, for some definition of the term.

(This isn’t me throwing shade — it’s a factual statement. In formal analysis, definitions really, really matter!)

A paper by He, Sundararajan, Datta, Derek and Mitchell (from 2005!) looked at the 802.11i handshake and tried to determine its security properties. What they determined is that yes, indeed, it did produce a secret and strong key, even when an attacker could tamper with and replay messages (under various assumptions). This is good, important work. The proof is hard to understand, but this is par for the course. It seems to be correct.

[Figure] Representation of the 4-way handshake from the paper by He et al. Yes, I know you’re like “what?”. But that’s why people who do formal verification of protocols don’t have many friends.

Even better, there are other security proofs showing that — provided the nonces are never repeated — encryption modes like CCM and GCM are highly secure. This means that given a secure key, it should be possible to encrypt safely.

So what went wrong?

The critical problem is that while people looked closely at the two components — handshake and encryption protocol — in isolation, apparently nobody looked closely at the two components as they were connected together. I’m pretty sure there’s an entire geek meme about this.

[Image] Two unit tests, 0 integration tests. Thanks, Twitter.

Of course, the reason nobody looked closely at this stuff is that doing so is just plain hard. Protocols have an exponential number of possible cases to analyze, and we’re just about at the limit of the complexity of protocols that human beings can truly reason about, or that peer-reviewers can verify. The more pieces you add to the mix, the worse this problem gets.

In the end we all know that the answer is for humans to stop doing this work. We need machine-assisted verification of protocols, preferably tied to the actual source code that implements them. This would ensure that the protocol actually does what it says, and that implementers don’t further screw it up, thus invalidating the security proof.

This needs to be done urgently, but we’re so early in the process of figuring out how to do it that it’s not clear what it will take to make this stuff go live. All in all, this is an area that could use a lot more work. I hope I live to see it.

===

Update: An early version of this post suggested that the attacker would replay the message. Actually, the paper describes forcing the AP to resend it by blocking it from being received at the client. Thanks to Nikita Borisov for the fix.













Neutron-Star Collision Shakes Space-Time and Lights Up the Sky


On Aug. 17, the Advanced Laser Interferometer Gravitational-Wave Observatory (LIGO) detected something new. Some 130 million light-years away, two super-dense neutron stars, each as small as a city but heavier than the sun, had crashed into each other, producing a colossal convulsion called a kilonova and sending a telltale ripple through space-time to Earth.

When LIGO picked up the signal, the astronomer Edo Berger was in his office at Harvard University suffering through a committee meeting. Berger leads an effort to search for the afterglow of collisions detected by LIGO. But when his office phone rang, he ignored it. Shortly afterward, his cellphone rang. He glanced at the display to discover a flurry of missed text messages:

Edo, check your email!

Pick up your phone!

“I kicked everybody out that very moment and jumped into action,” Berger said. “I had not expected this.”

LIGO’s pair of ultrasensitive detectors in Louisiana and Washington state made history two years ago by recording the gravitational waves coming from the collision of two black holes — a discovery that earned the experiment’s architects the Nobel Prize in Physics this month. Three more signals from black hole collisions followed the initial discovery.

Yet black holes don’t give off light, so making any observations of these faraway cataclysms beyond the gravitational waves themselves was unlikely. Colliding neutron stars, on the other hand, produce fireworks. Astronomers had never seen such a show before, but now LIGO was telling them where to look, which sent teams of researchers like Berger’s scurrying to capture the immediate aftermath of the collision across the full range of electromagnetic signals. In total, more than 70 telescopes swiveled toward the same location in the sky.

They struck the motherlode. In the days after Aug. 17, astronomers made successful observations of the colliding neutron stars with optical, radio, X-ray, gamma-ray, infrared and ultraviolet telescopes. The enormous collaborative effort, detailed today in dozens of papers appearing simultaneously in Physical Review Letters, Nature, Science, Astrophysical Journal Letters and other journals, has not only allowed astrophysicists to piece together a coherent account of the event, but also to answer longstanding questions in astrophysics.

“In one fell swoop, gravitational wave measurements” have opened “a window onto nuclear astrophysics, neutron star demographics and physics and precise astronomical distances,” said Scott Hughes, an astrophysicist at the Massachusetts Institute of Technology’s Kavli Institute for Astrophysics and Space Research. “I can’t describe in family-friendly words how exciting that is.”

Today, Berger said, “will go down in the history of astronomy.”

X Marks the Spot

When Berger got the calls, emails, and the automated official LIGO alert with the probable coordinates of what appeared to be a neutron-star merger, he knew that he and his team had to act quickly to see its aftermath using optical telescopes.

The timing was fortuitous. Virgo, a new gravitational-wave observatory similar to LIGO’s two detectors, had just come online in Europe. The three gravitational-wave detectors together were able to triangulate the signal. Had the neutron-star merger occurred a month or two earlier, before Virgo started taking data, the “error box,” or area in the sky that the signal could have come from, would have been so large that follow-up observers would have had little chance of finding anything.

The LIGO and Virgo scientists had another stroke of luck. Gravitational waves produced by merging neutron stars are fainter than those from black holes and harder to detect. According to Thomas Dent, an astrophysicist at the Albert Einstein Institute in Hannover, Germany, and a member of LIGO, the experiment can only sense neutron-star mergers that occur within 300 million light-years. This event was far closer — at a comfortable distance for both LIGO and the full range of electromagnetic telescopes to observe it.

But at the time, Berger and his colleagues didn’t know any of that. They had an agonizing wait until sunset in Chile, when they could use an instrument called the Dark Energy Camera mounted on the Victor M. Blanco telescope there. The camera is great when you don’t know precisely where you’re looking, astronomers said, because it can quickly scan a very large area of the sky. Berger also secured use of the Very Large Array (VLA) in central New Mexico, the Atacama Large Millimeter Array (ALMA) in Chile and the space-based Chandra X-ray Observatory. (Other teams that received the LIGO alert asked to use VLA and ALMA as well.)

A few hours later, data from the Dark Energy Camera started coming in. It took Berger’s team 45 minutes to spot a new bright light source. The light appeared to come from a galaxy called NGC 4993 in the constellation Hydra that had been pointed out in the LIGO alert, and at approximately the distance where LIGO had suggested they look.

“That got us really excited, and I still have the email from a colleague saying ‘Holy , look at that bright source near this galaxy!’” Berger said. “All of us were kind of shocked,” since “we didn’t think we would succeed right away.” The team had expected a long slog, maybe having to wade through multiple searches after LIGO detections for a couple of years until eventually spotting something. “But this just stood out,” he said, “like when an X marks the spot.”

Meanwhile, at least five other teams discovered the new bright light source independently, and hundreds of researchers made various follow-up observations. David Coulter, an astronomer at University of California, Santa Cruz, and colleagues used the Swope telescope in Chile to pinpoint the event’s exact location, while Las Cumbres Observatory astronomers did so with the help of a robotic network of 20 telescopes around the globe.

For Berger and the rest of the Dark Energy Camera follow-up team, it was time to call in the Hubble Space Telescope. Securing time on the veteran instrument usually takes weeks, if not months. But for extraordinary circumstances, there’s a way to jump ahead in line, by using “director’s discretionary time.” Matt Nicholl, an astronomer at the Harvard-Smithsonian Center for Astrophysics, submitted a proposal on behalf of the team to take ultraviolet measurements with Hubble — possibly the shortest proposal ever written. “It was two paragraphs long — that’s all we could do in the middle of the night,” Berger said. “It just said that we’ve found the first counterpart of a binary neutron star merger, and we need to get UV spectra. And it got approved.”

As the data trickled in from the various instruments, the collected data set was becoming more and more astounding. In total, the original LIGO/Virgo discovery and the various follow-up observations by scientists have yielded dozens of papers, each describing astrophysical processes that occurred during and after the merger.

Mystery Bursts

Neutron stars are compact neutron-packed cores left over when massive stars die in supernova explosions. A teaspoon of neutron star would weigh as much as one billion tons. Their internal structure is not completely understood. Neither is their occasional aggregation into close-knit binary pairs of stars that orbit each other. The astronomers Joe Taylor and Russell Hulse found the first such pair in 1974, a discovery that earned them the 1993 Nobel Prize in Physics. They concluded that those two neutron stars were destined to crash into each other in about 300 million years. The two stars newly discovered by LIGO took far longer to do so.

The analysis by Berger and his team suggests that the newly discovered pair was born 11 billion years ago, when two massive stars went supernova a few million years apart. Between these two explosions, something brought the stars closer together, and they went on circling each other for most of the history of the universe. The findings are “in excellent agreement with the models of binary-neutron-star formation,” Berger said.

The merger also solved another mystery that has vexed astrophysicists for the past five decades.

On July 2, 1967, two United States satellites, Vela 3 and 4, spotted a flash of gamma radiation. Researchers first suspected a secret nuclear test conducted by the Soviet Union. They soon realized this flash was something else: the first example of what is now known as a gamma ray burst (GRB), an event lasting anywhere from milliseconds to hours that “emits some of the most intense and violent radiation of any astrophysical object,” Dent said. The origin of GRBs has been an enigma, although some people have suggested that so-called “short” gamma-ray bursts (lasting less than two seconds) could be the result of neutron-star mergers. There was no way to directly check until now.

In yet another nod of good fortune, it so happened that on Aug. 17, the Fermi Gamma-Ray Space Telescope and the International Gamma-Ray Astrophysics Laboratory (Integral) were pointing in the direction of the constellation Hydra. Just as LIGO and Virgo detected gravitational waves, the gamma-ray space telescopes picked up a weak GRB, and, like LIGO and Virgo, issued an alert.

A neutron star merger should trigger a very strong gamma-ray burst, with most of the energy released in a fairly narrow beam called a jet. The researchers believe that the GRB signal hitting Earth was weak only because the jet was pointing at an angle away from us. Proof arrived about two weeks later, when observatories detected the X-ray and radio emissions that accompany a GRB. “This provides smoking-gun proof that normal short gamma-ray bursts are produced by neutron-star mergers,” Berger said. “It’s really the first direct compelling connection between these two phenomena.”

Hughes said that the observations were the first in which “we have definitively associated any short gamma-ray burst with a progenitor.” The findings indicate that at least some GRBs come from colliding neutron stars, though it’s too soon to say whether they all do.

Striking Gold

Optical and infrared data captured after the neutron-star merger also help clarify the formation of the heaviest elements in the universe, like uranium, platinum and gold, in what’s called r-process nucleosynthesis. Scientists long believed that these rare, heavy elements, like most other elements, are made during high-energy events such as supernovas. A competing theory that has gained prominence in recent years argues that neutron-star mergers could forge the majority of these elements. According to that thinking, the crash of neutron stars ejects matter in what’s called a kilonova. “Once released from the neutron stars’ gravitational field,” the matter “would transmute into a cloud full of the heavy elements we see on rocky planets like Earth,” Dent explained.

Optical telescopes picked up the radioactive glow of these heavy elements — strong evidence, scientists say, that neutron-star collisions produce much of the universe’s supply of heavy elements like gold.

“With this merger,” Berger said, “we can see all the expected signatures of the formation of these elements, so we are solving this big open question in astrophysics of how these elements form. We had hints of this before, but here we have a really nearby object with exquisite data, and there is no ambiguity.” According to Daniel Holz, an astrophysicist at the University of Chicago, “back-of-the-envelope calculations indicate that this single collision produced an amount of gold greater than the weight of the Earth.”

The scientists also inferred a sequence of events that may have followed the neutron-star collision, providing insight into the stars’ internal structure. Experts knew that the collision outcome “depends very much on how large the stars are and how ‘soft’ or ‘springy’ — in other words, how much they resist being deformed by super-strong gravitational forces,” Dent said. If the stars are extra soft, they may immediately be swallowed up inside a newly formed black hole, but this would not leave any matter outside to produce a gamma-ray burst. “At the other end of the scale,” he said, “the two neutron stars would merge and form an unstable, rapidly spinning super-massive neutron star, which could produce a gamma-ray burst after a holdup of tens or hundreds of seconds.”

The most plausible case may lie somewhere in the middle: The two neutron stars may have merged into a doughnut-shaped unstable neutron star that launched a jet of super-energetic hot matter before finally collapsing as a black hole, Dent said.

Future observations of neutron-star mergers will settle these questions. And as the signals roll in, experts say the mergers will also serve as a precision tool for cosmologists. Comparing the gravitational-wave signal with the redshift, or stretching, of the electromagnetic signals offers a new way of measuring the so-called Hubble constant, which gives the age and expansion rate of the universe. Already, with this one merger, researchers were able to make an initial measurement of the Hubble constant “in a remarkably fundamental way, without requiring the multitude of assumptions” that go into estimating the constant by other methods, said Matthew Bailes, a member of the LIGO collaboration and a professor at the Swinburne University of Technology in Australia. Holz described the neutron star merger as a “standard siren” (in a nod to the term “standard candles” used for supernovas) and said that initial calculations suggest the universe is expanding at a rate of 70 kilometers per second per megaparsec, which puts LIGO’s Hubble constant “smack in the middle of estimates.”

To improve the measurement, scientists will have to spot many more neutron-star mergers. Given that LIGO and Virgo are still being fine-tuned to increase their sensitivity, Berger is optimistic. “It is clear that the rate of occurrence is somewhat higher than expected,” he said. “By 2020 I expect at least one to two of these every month. It will be tremendously exciting.”




Is Haskell the right language for teaching functional programming principles?


No! (As Simon Thompson explains.)
You cannot not love the "exploration of the length function" at the bottom. Made me smile in the middle of running errands.
