
## The Digital CFO, by Mark Schwartz (Monday December 17th, 2018 at 10:47 PM)


As we navigate the changes brought about by the digital revolution, we can see that the role of the enterprise CFO is also changing. Whether the one is the cause of the other, and which is cause and which effect, I’m not so sure. But the digital world certainly requires us to think very differently about what a CFO is, and what role the CFO plays in the enterprise.

I’m not the first to notice this. Many articles in CFO journals, reports by consulting firms, and conference seminars have hinted at or described these changes. In this blog post I will try to combine all of their thoughts into a coherent way of characterizing the digital CFO, and I’ll add the AWS Enterprise Strategy angle to it. I’ll do so by comparing the “old” and the “new” ways of defining what a CFO does. These are very much generalizations—different CFOs and their enterprises have treated the role differently. When you put all of these elements together, though, it is clear that the CFO has become central to driving the digital performance of the enterprise.

Old: Backward Looking, Trailing Indicators, Financial Metrics
New: Forward Looking, Leading Indicators, Performance Metrics

Traditionally, CFOs have focused much of their attention on financial reporting, which means, in a sense, that they have been looking in the rearview mirror. A financial report summarizes performance over a previous period, or, as with the balance sheet, summarizes the state of the business at the end of a period.

For today’s CFO, of course, financial reporting is still critical. But technology today makes it possible for the finance organization to get near real-time information about the company’s performance, and even to gather data on leading indicators—those that show how the company will perform in the future. As Jeannette Wade, CFO of the Office of Technology Services & Security, says, “The role of the CFO has changed from being the keeper of the financial information to the driver of business change with financial information.”[1] A variety of business intelligence and analytical tools give the CFO the ability to slice and dice this information, even to perform predictive analytics through new machine learning technologies. And data lakes and real-time streaming make it easier to analyze information across business silos, business units, and even subsidiary companies to gain a complete picture. The digital CFO takes advantage of all of these tools to make decisions and give the other CXOs insight into company performance.

Old: Quarterly Performance
New: Company Sustainability (and quarterly performance)

As the steward for the board and the shareholders, the CFO must make sure that the company is future-ready and sustainable. Boards are increasingly worried about company sustainability as they see industries being disrupted and observe how precipitously companies drop from industry leadership positions. Company sustainability appears to involve a paradox: the company must continue to innovate and grow, but at the same time must guard against risks to its core business as well as security breaches and market uncertainties. The one certain thing is that enterprises cannot sustain their market leadership position through stasis, by maintaining the status quo, or by managing only for today or this quarter or this fiscal year.

Old: Cost Oriented
New: Growth Oriented

Eighty-one percent of CFOs believe it is their responsibility to identify and target new growth areas across the enterprise. CFOs must find and cultivate new sources of growth.[2] Following from my previous point, the CFO cannot just focus on controlling and reducing costs. Maintaining a market leadership position in the current economy requires continuous innovation and seizing market opportunities as they arise. Boards and CFOs—not to mention the financial markets—demand growth, and the CFO must find a way to deliver it, which means spotting growth opportunities and directing cash and other resources toward them. The CFO cannot be a “no-sayer;” on the contrary, good financial stewardship means encouraging and supporting innovation and the spending required for growth.

Old: Focused on Cost
New: Focused on Leanness

This subtlety makes all the difference in the digital world, which is a world of speed, responsiveness, and sustained feedback cycles. In order to foster innovation, seize opportunities faster than competitors, and respond effectively to customer needs, the enterprise needs to be fast. By "fast" I mean that lead times must be short, and in the Lean Manufacturing/Toyota Production System paradigm, this means removing waste that contributes to long lead times. That waste might not be in obvious places—it is often in handoffs between business silos, bureaucracy and redundant or ineffective controls, poor hiring practices, or wasteful feature bloat in IT systems (that is, "requirements" that shouldn't be required). Fortunately, reducing waste also reduces cost, or at least unit cost.

Old: Vetting Innovation, Guarding Funds and Resources
New: Fostering Innovation, Channeling Funds and Resources

Is the CFO one of many gatekeepers and guardians in the organization who “vet” innovations by saying “no” to most of them? Or does the CFO help the company exploit potential growth opportunities by encouraging the right kind of innovation? In a company with a growth imperative, it must be the latter.

Old: Focused on Plans and Milestones
New: Focused on Outcomes

In the old world, the CFO (and everyone else in the enterprise) focused on plans and milestone adherence as a way to (ostensibly) control them and manage risk. But in an environment of uncertainty, plans must change often; and in any case, it is not the plan that is important—it is the outcome. In a world of short lead times and fast delivery (especially from IT), outcomes can be measured quickly and used as a control. The question should never be “Did you complete all the requirements and execute according to plan?” but rather “Are you generating valuable outcomes quickly?”

Old: Deploying Capital Based on Business Cases
New: Deploying Capital Based on Staged Investments

Business cases prepared in advance of an investment are subject to uncertainty, complexity, and rapid change. It is risky to invest in a business case in times of disruption and volatility. Fortunately, the digital world gives CFOs a better way to manage investment risk, and that is through incremental or staged (or metered) investment. Instead of committing funding to a large initiative, the enterprise should commit funding in stages. IT teams can and should deliver results quickly and frequently (that is what DevOps is all about), and the business results of these deliveries should be scrutinized as a way of deciding whether to continue funding an initiative and at what level. It is even possible to make small investments in a portfolio of opportunities and then decide which will yield the best returns before deciding which ones to commit funding to.
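The staged-funding logic can be sketched in a few lines of Python. The stage costs, observed return multiples, and hurdle value below are invented for illustration; the point is only the control structure—fund a stage, observe results, then decide whether to continue:

```python
def staged_funding(stages, hurdle=1.0):
    """Fund an initiative one stage at a time, stopping as soon as the
    observed return multiple of the latest stage falls below the hurdle.

    `stages` is a list of (stage_cost, observed_return_multiple) pairs,
    in the order the stages would be delivered.
    """
    committed = 0.0
    for cost, observed_return in stages:
        committed += cost  # fund this stage and observe its results
        if observed_return < hurdle:
            break  # the evidence is poor: stop funding further stages
    return committed

# An upfront commitment would have spent all 300; staged funding stops
# after the second stage reveals a weak return, capping exposure at 200.
spent = staged_funding([(100, 1.4), (100, 0.6), (100, 1.8)], hurdle=1.0)
```

The same loop, run across a portfolio of small initial bets, is what lets the CFO decide which opportunities earn follow-on funding.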

Old: Sustainable Competitive Advantage
New: Creative Destruction and Continuous Reinvention

It has become very difficult to sustain a competitive advantage. Barriers to entry are low. Customers are fickle. Government rules and policies change, as does the geopolitical situation. Distribution channels are disintermediated. Disruption is rife. It may not be wise to depend on today’s competitive advantage to still exist when you wake up tomorrow. Instead, companies compete today based on continuous innovation, frequent renovation of business models, and creative destruction of existing advantages. A CFO is looking to create change, not to avoid it.

Old: Supporting Siloed Activity
New: Supporting Cross-Functional Activity

Old-style budgeting and performance management were based on functionally siloed organizations. Marketing was responsible for marketing objectives and costs; IT for IT objectives and costs; business units for their own P&Ls. But today's organizational practices are based on cross-functional teams, the more widely representative of different functions, the better. The CFO must foster or create transparency across organizational silos, and must find a way to fund and measure the performance of teams that cut across traditional organizational boundaries.

Old: CAPEX and Fixed Costs, Total Costs
New: OPEX and Variable Costs, Marginal Costs

As I explain in another post, Decisions at the Margins, enterprises can make better decisions in the digital world if they focus on marginal costs and marginal value, rather than total or average. The costs to operate a digital service are now typically variable costs and often represent operational expense rather than capital expense. The cloud, for example, turns what used to be large, fixed costs into small, variable costs (see my post on Micro-Optimization). Often, these costs are expensed rather than capitalized. DevOps breaks down monolithic IT system delivery costs into small, incremental costs to deliver individual features or microservices. This is one reason for lowered barriers to entry for startups: they no longer need large investments before they start doing business; instead, they can see their costs rise only as the business scales. Enterprises, too, can take advantage of this change to cost structure.
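The difference between the two cost structures is easy to see in a toy model. The server capacity, server price, and per-transaction rate below are made-up numbers, not real figures, but they show why the cloud's variable costs lower barriers to entry:

```python
import math

def onprem_cost(transactions, server_capacity=10_000, server_price=5_000.0):
    """Fixed (really step) costs: capacity is bought in whole servers,
    so the full price of a server is paid even for the first transaction."""
    servers = max(1, math.ceil(transactions / server_capacity))
    return servers * server_price

def cloud_cost(transactions, per_transaction=0.02):
    """Variable costs: spend scales with volume, so the marginal cost of
    one more transaction is a small constant rather than a step."""
    return transactions * per_transaction

# A startup processing its very first transaction pays the whole server
# price on-prem, but only a couple of cents in the cloud; costs then rise
# only as the business scales.
```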

Old: IT as a Cost Center
New: IT as a Partner and Enabler

IT supports and enables the CFO in accomplishing all of the changes I have discussed in this post. Working with IT, the CFO can bring transparency across silos, obtain near real-time data and analytics, reduce lead times, and divide investments into low-risk chunks. But more than this, IT is an enabler of business growth and a driver of innovation, rather than just a fixed cost the enterprise must bear and try to minimize.

Old: Finance-Operations Focused
New: Cross-Enterprise Operations Focused

According to a McKinsey study, two-thirds of CFOs think they should spend less time on traditional finance activities and more on strategic leadership.[3] BCG found that about 30% of the finance department's effort is invested just in the mechanics of assembling data and resolving inconsistencies.[4] CFOs and their finance groups have spent entirely too much time on the mechanics of getting the books closed each month. The CFO is the CFO for the entire organization!

Old: Risk is Unpredictability and Financial Risk
New: Risk is Predictability and Business Risk

The CFO manages risk for the enterprise, and it is critical that the CFO teach the enterprise to think differently about risk. The Status Quo Bias (see my other blog post) leads us to perceive risk in things that are new, while overlooking the risks in the old. (Should you be worried more about the security of new things or the problems in your security posture today?) It leads us to focus on cost and schedule risk when the real risk is the risk of not gaining the desired outcome at a good price quickly. It makes us think that innovations are risky whereas not innovating is even riskier. It convinces us that it is risky not to follow an upfront plan and business case even when the environment is constantly changing and new information is being discovered. The CFO must bring to the enterprise a rational basis for making risk decisions.

Old: Compliance is a Checklist, Reactive Compliance
New: Compliance is a Design, Proactive Compliance

The GDPR is a model for how we should think about compliance going forward. It requires that we implement "privacy by design"—in other words, that we design customer privacy into our systems and processes as we create them. This is indeed the way IT organizations increasingly think about security, resilience, and privacy—all IT systems should be designed for "ruggedness," or for the way they will be used and operated. It is not enough to make customer information secure today; tomorrow a hacker will invent a new way to steal it. Similar considerations apply to other compliance frameworks—HIPAA, Sarbanes-Oxley, FISMA. Instead of auditing after the fact whether they have the appropriate controls, we need to design systems and processes so that they will meet the control objectives in a way that is resilient and future-oriented.

In summary:

| Old | New |
| --- | --- |
| Backward Looking, Trailing Indicators, Financial Metrics | Forward Looking, Leading Indicators, Performance Metrics |
| Quarterly Performance | Company Sustainability (and quarterly performance) |
| Cost-Oriented | Growth-Oriented |
| Focused on Cost | Focused on Leanness |
| Vetting Innovation, Guarding Funds and Resources | Fostering Innovation, Channeling Funds and Resources |
| Focused on Plans and Milestones | Focused on Outcomes |
| Deploying Capital Based on Business Cases | Deploying Capital Based on Staged Investments |
| Sustainable Competitive Advantage | Creative Destruction and Continuous Reinvention |
| Supporting Siloed Activity | Supporting Cross-Functional Activity |
| CAPEX and Fixed Costs, Total Costs | OPEX and Variable Costs, Marginal Costs |
| IT as a Cost Center | IT as a Partner and Enabler |
| Finance-Operations Focused | Cross-Enterprise Operations Focused |
| Risk is Unpredictability and Financial Risk | Risk is Predictability and Business Risk |
| Compliance is a Checklist, Reactive Compliance | Compliance is a Design, Proactive Compliance |

Mark

[1] https://channels.theinnovationenterprise.com/articles/expert-interview-jeanette-wade-cfo-executive-office-of-technology-services-security. This and the other references below are also cited in my upcoming book, War & Peace & IT: Business Leadership, Technology, and Success in the Digital Age.


## Micro-Optimization: Activity-Based Costing for Digital Services? by Mark Schwartz (Monday December 17th, 2018 at 2:01 PM)


As I have argued in my other posts, the digital world is bringing a radical change in the role of the CFO. For one thing, the cloud and DevOps are turning fixed costs into variable costs. For another, rapid iteration and delivery make it possible—and essential—for us to make financial decisions based on marginal costs and marginal returns rather than on total or average costs.

Now, a final piece to the puzzle: with the cloud, we can break down even a single digital transaction into its component costs and then work to both optimize those costs and gain insight into our unit economics—not just on a customer-by-customer or transaction-by-transaction basis, but on a digital operation-by-digital operation basis within a transaction. The implications, as I will show, are substantial for finance and IT management.

The first clue I got to this new possibility was in conversation with two of AWS’s serverless heroes: Aleksandar Simovic and Slobodan Stojanovic. We were talking about the implications of serverless computing, which we agreed went deeper than most people realize. If you are not familiar with serverless, a brief explanation is in order here. Traditionally, computing infrastructure consisted of physical server computers in a datacenter. These were fixed costs—step costs, really—where each server could handle a certain amount of transaction volume, and then had to be replaced or upgraded when the volume of the company’s transactions grew beyond its capabilities. With the cloud, a company can pay for “virtual” servers managed by AWS, with a tremendous increase in flexibility—they can add or subtract servers instantly, or change the server’s processing power as needed for their ever-changing business.

But with serverless computing through AWS Lambda, the company gains an order of magnitude more flexibility. There is no longer a need for the company to provision virtual servers. Instead, it simply designates what code should be executed and under what conditions, and AWS deals with everything else behind the scenes. The company pays only for the execution of the code. “Compute” is essentially a utility, like electricity—flip a switch and it is on; flip a switch and it is off. You only pay for the electricity when you turn it on—let’s say to generate light so that you can read our blog posts.

Now think about how this changes the economic picture. The code executes only when a transaction is being processed, and the cost depends only on the execution of the code. In other words, cost is truly variable—by transaction.

Next, consider that each customer transaction is typically handled by multiple pieces of code, called microservices. For example, when you buy something online, your transaction is probably delegated to a microservice that looks up your customer information, another that checks inventory, another that processes your credit card payment, and another that tells the warehouse to pick and package the item. Some of these microservices might have been created by your technologists in-house; some might be services provided by third parties. Each of these microservices uses serverless compute—and you can find out the cost of executing that microservice.

Putting all these pieces together you can say that a digital transaction requires a number of activities, and you can break the cost of a transaction into the cost of its component activities. You can do activity-based costing, that is, even for things that are internal to a computing infrastructure!
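As a sketch, activity-based costing of a serverless transaction reduces to summing per-invocation costs. The microservice names and prices here are invented for illustration (they are not real AWS rates), but the structure mirrors the checkout example above:

```python
# Hypothetical per-invocation cost (in dollars) of each microservice
# that participates in one online checkout transaction.
ACTIVITY_COST = {
    "lookup_customer":  0.000021,
    "check_inventory":  0.000013,
    "charge_card":      0.000180,  # includes a third-party payment fee
    "notify_warehouse": 0.000008,
}

def transaction_cost(activities):
    """Activity-based cost of one transaction: the sum of the costs of
    its component activities (microservice invocations)."""
    return sum(ACTIVITY_COST[a] for a in activities)

checkout = ["lookup_customer", "check_inventory",
            "charge_card", "notify_warehouse"]
# transaction_cost(checkout) is the unit cost of a single checkout;
# multiplied by transaction volume it becomes a line item the CFO can
# act on, and each entry in ACTIVITY_COST is a candidate for optimization.
```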

Why would you want to? Well, first, you can get a very detailed sense of your unit economics and then you can make decisions that will improve what is probably a growing cost area of your business. Not only can you look at customer profitability, but you can look at transaction profitability and even profitability of the features within a transaction. You can use this information to set pricing, make decisions about what features to offer in a product, and determine what features to promote.

You can also consider alternative ways of structuring the digital services that make up your transaction. Will it be less expensive to stop using a third-party microservice component and instead develop an equivalent in-house? Before, you could only make this decision by comparing the fixed cost of developing it against your licensing fees. But that missed an increasingly important cost element: the cost of infrastructure or computing power. You can now bring that cost into your build versus buy decision.

Your technologists can also use the information to choose between different services that you can buy, or to make choices between different technical designs they can use as they are architecting the system. You can make granular decisions that—when you are operating at high volume—might make all the difference in cost and profitability.

Perhaps most importantly, you can better delegate responsibility for cost and profitability to the teams that are developing and managing the digital service. They can be responsible not just for the cost of developing the service—which made more sense when we thought of the service as just a simple capital asset that never thereafter changed—but for the costs of operating the service as well. I talk further about this economic accountability in another post on FinOps, and about the importance of making decisions at the margins—that is, in terms of marginal costs and marginal revenues—in my post on Marginal Decisions.

The idea of accounting for and making decisions based upon these micro-costs at high volume is real. Aleksandar Simovic has developed an elegant software tool called CapitalFlow that reports on this micro-cost information in a way that enterprises can use. You can take a look at it here: https://capitalflow.ai.

Mark


## Introducing FinOps—Excuse Me, DevSecFinBizOps, by Mark Schwartz (Monday December 17th, 2018 at 1:59 PM)


Who is accountable for managing the costs of a digital service?

Enterprises are increasingly pushing accountability outward to cross-functional teams. There is an important reason for this: teams can make decisions and put them into action quickly, and speed is important. By putting a cross-functional group of people on the same team, the enterprise makes sure they have all of the skills and authorities they need to do their work—they become a single, self-contained unit, thereby avoiding time-consuming handoffs between different functional silos in the enterprise. Because the team is jointly accountable, it makes good decisions that don’t sacrifice one group’s needs for another’s.

In the IT world, today’s best practices revolve around DevOps. In the past, IT development and operations were two separate functional silos: IT development wrote new code, and IT operations managed that code when it was running in production. There was a handoff from development to operations when the code was ready, and the two groups worked to conflicting incentives: development wanted to deliver new functions as quickly as possible, and operations wanted stability. The DevOps solution was to make a single team responsible for both development and operations—thus the name. The slogan was: “You build it, you run it.” The team as a whole was incentivized to build quickly while at the same time ensuring stability.

It didn’t take long to realize that there was another functional silo with a somewhat different set of interests: security. Security, the IT profession realized, should be built into code as it is developed rather than added later on by a different team. Predictably, the idea became known as DevSecOps or some variation on that term.

In A Seat at the Table: IT Leadership in the Age of Agility, I introduced the idea of teams that brought together business folks and technologists with shared incentives. The goal was to avoid the handoff of requirements from business stakeholders to the IT team, and instead make everyone responsible for accomplishing a business objective. I didn’t use the term, but why don’t we just call it DevSecBizOps.

Well, today I’d like to introduce you to FinOps, the idea of combining financial accountability with autonomous team delivery. Delivery teams can be made responsible not just for delivering code, operating the code, securing the code, and making sure that the code accomplishes its objectives, but also for managing its costs, both fixed and variable.

Why does this make sense? Costs, like performance and other characteristics of the code, can be optimized by taking feedback from actual performance and using it to tweak the code. Just as the team might monitor the performance (speed, availability, etc.) of the code, they could just as well monitor the costs of their code, both in the infrastructure it consumes in the cloud, and in the licensing fees paid to third-parties whose products or services are used by the code.

They could then make good tradeoffs and optimization decisions that factor in costs. For example, if the code used an Oracle database, the delivery team would have an incentive to find a less expensive database that maintained the equivalent level of operational performance (or better or even somewhat worse, as appropriate). As I describe in a related post (Micro-Optimization) the team could look at the operational cost of each of the microservices it built or used off-the-shelf and find ways to reduce their cost, perhaps by rebuilding them or re-designing the system to avoid using them.

If the code was deployed using automated scripts into a particular instance type, the team could consider using a less powerful instance type or fewer replicated nodes. Really, there are a tremendous number of variables the team could work with in managing costs.

In other words, the team would be accountable for all aspects of its piece of code, including its cost—its cost to develop and deliver, its cost to secure, its cost to run, its cost to upgrade, and everything else. It would be the part of the enterprise best positioned to take on this responsibility, as it could make the best tradeoffs and would have access to the most detailed cost data.

To move toward this model, the first step for an enterprise would be to give the team transparency into the costs associated with its code. This is facilitated by the micro-optimization I described in the other post. The team would be able to track, at a detailed level, all of the aspects of cost for its service, and would be able to see how it changed over time.

The second step would be to get the right skills onto the team. DevOps usually favors T-shaped people for its teams—that is, people who have a broad set of skills but go deep in one area. So the financial skills might be secondary skills of someone already on the team (most likely of someone who has a background in IT infrastructure and operations).

The next step would be to change the team’s delivery process to include consideration of costs. A good model for this is the way we handled automated performance testing at USCIS. Every time new code was checked in, the entire system was rebuilt and re-tested. Among those tests was a test of the code’s performance (speed) when executing a certain set of transactions. The results were considered historically—if the code suddenly performed more slowly when someone’s new code was checked in, then the problem was flagged and the developer could investigate whether the slowness could be avoided. The same technique could be used for costs: every time new code is checked in, it could be tested against a cost baseline.
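A cost baseline check could be wired into the build pipeline in the same way as that performance test. This sketch assumes the team records a measured cost per build in some history store; the tolerance value is an arbitrary illustration:

```python
def flags_cost_regression(history, new_cost, tolerance=0.10):
    """Return True if the cost measured for the latest check-in exceeds
    the historical average by more than `tolerance` (a fraction).

    `history` is a list of measured costs from previous builds of the
    same test-transaction suite.
    """
    baseline = sum(history) / len(history)
    return new_cost > baseline * (1 + tolerance)

# A check-in that makes the test transactions 20% more expensive is
# flagged for the developer to investigate; one within 5% of the
# baseline passes quietly.
```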

Finally, the enterprise would establish aggregate reporting of costs and set priorities and incentives for managing costs within the code.

That’s it—DevSecFinBizOps, for your enjoyment and implementation.

Mark


## Mathematicians Seal Back Door to Breaking RSA Encryption, by Kevin Hartnett (Monday December 17th, 2018 at 1:55 PM)


My recent story for Quanta explained a newly proved phenomenon that might seem surprising from a naive perspective: Virtually all polynomials of a certain type are “prime,” meaning they can’t be factored.

The proof has implications for many areas of pure mathematics. It’s also great news for a pillar of modern life: digital encryption.

The main technique we use to keep digital information secure is RSA encryption. It’s a souped-up version of the encryption scheme a seventh grader might devise to pass messages to a friend: Assign a number to every letter and multiply by some secretly agreed-upon key. To decode a message, just divide by the secret key.

RSA encryption works in a similar fashion. In simplified form, it goes like this: A user starts with a message and performs arithmetic on it that involves multiplication by a very large number (hundreds of digits long). The only way to decode the message is to find the prime factors of the resulting product. The security of RSA encryption rests on the fact that there's no fast way to identify the prime factors of very large numbers. If you're not the intended recipient of a message — and if you therefore lack the right key to decode it — you could search for a thousand years with the best computers and still not find the right prime factors.

But there is a back door, and it has to do with polynomial equations. Every number can be represented as a unique polynomial equation. While it’s hard to find the prime factors of a number, it’s easy to find the factors of a polynomial. And once you know the factors of a polynomial, you can use that information to find the prime factors of the number you started with.

Here’s how it works.

Step One: Pick a number whose prime factors you’d like to know. To take a simple example, let’s use the number 15.

Step Two: Convert 15 into binary notation:

1111.

Step Three: Turn that binary expression into a polynomial by treating the binary digits as coefficients of a polynomial:

x³ + x² + x + 1.

(Note that this polynomial equals 15 when x = 2, because 2 is the base of binary notation.)

Step Four: Factor the polynomial:

(x² + 1) × (x + 1).

Step Five: Plug x = 2 into each of those factors:

(2² + 1) = 5

(2 + 1) = 3.

Conclusion: 5 and 3 are the prime factors of 15.
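The five steps can be run end to end in Python; the only assumption here is that the sympy library is available for the polynomial-factoring step, and the math is exactly the walkthrough above:

```python
import sympy as sp

def prime_factors_via_polynomial(n):
    """Factor n by converting its binary digits into polynomial
    coefficients, factoring the polynomial, and evaluating each factor
    at x = 2 (the base of binary notation). Works when the polynomial
    factors cleanly, as it does for the article's example of 15."""
    x = sp.symbols("x")
    bits = [int(b) for b in bin(n)[2:]]          # 15 -> [1, 1, 1, 1]
    degree = len(bits) - 1
    poly = sum(c * x**(degree - i) for i, c in enumerate(bits))
    _, factors = sp.factor_list(poly)            # e.g. [(x + 1, 1), (x**2 + 1, 1)]
    return sorted(int(f.subs(x, 2))
                  for f, mult in factors for _ in range(mult))

# prime_factors_via_polynomial(15) recovers the factors 3 and 5.
```

As the proof discussed below shows, for the very long polynomials corresponding to RSA-sized numbers, `factor_list` would almost never find any factors at all—which is exactly why this back door stays shut.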

This seems like a complicated way to find the prime factors of a small number like 15, whose factors are easy to spot straightaway. But with very large numbers — numbers with hundreds of digits — this polynomial method gives you a remarkable advantage. There’s no fast algorithm for factoring large numbers. But there are fast algorithms for factoring large polynomials. So once you convert your large number to a large polynomial, you’re very close to finding the number’s prime factors.

Does this mean RSA encryption is in trouble? Actually, no. The reason for this has to do with the new proof about polynomials. The mathematicians Emmanuel Breuillard and Péter Varjú of the University of Cambridge proved that as polynomials with only 0 and 1 as coefficients get longer, they’re less and less likely to be factorable at all. And if a polynomial can’t be factored, it can’t be used to identify the prime factors of the number it’s based on.

Breuillard and Varjú’s proof effectively slams shut the polynomial back door for breaking RSA encryption. The very large numbers used in RSA encryption correspond to very long polynomials. Breuillard and Varjú proved that it’s nearly impossible to find polynomials of that length that can be factored. Mathematicians and cryptographers alike have long suspected this is the case. But when the cybersecurity of the entire world depends on some mathematical hack not working, it’s good to have proof that it doesn’t.


## GraphIt: A high-performance graph DSL, by adriancolyer (Wednesday December 12th, 2018 at 3:10 AM)

1 Share

GraphIt: a high-performance graph DSL Zhang et al., OOPSLA’18

The problem with finding the optimal algorithm and data structures for a given problem is that so often it depends. This is especially true when it comes to graph algorithms.

It is difficult to implement high-performance graph algorithms. The performance bottlenecks of these algorithms depend not only on the algorithm and the underlying hardware, but also on the size and structure of the graph. As a result, different algorithms running on the same machine, or even the same algorithm running with different types of graph on the same machine, can exhibit different performance bottlenecks.

What we’d like therefore, is some way of expressing a graph algorithm at a high level such that we can map it into different implementations, each applying different optimisations as needed. For bonus points, we could then automate the search within the optimisation space to find the best performing combination for the circumstances at hand.

This is exactly what GraphIt does. GraphIt combines a DSL for specifying graph algorithms with a separate scheduling language that determines implementation policy. You can specify a schedule yourself, or use autotuning to discover optimal schedules for you.

Compared to six state-of-the-art in-memory graph processing frameworks, GraphIt outperforms all of them (by up to 4.8x) in 24 out of 32 experiments in the evaluation. It is never more than 43% slower than the fastest framework across all 32 experiments. At the same time, the algorithms expressed using GraphIt require up to an order of magnitude less code to express.

We can think about the various possible implementation choices for graph algorithms as making trade-offs in three main dimensions:

1. Locality – the amount of spatial and temporal reuse in a program
2. Work efficiency – the inverse of the number of instructions (weighted by cycles per instruction) required (i.e., if it takes more instruction cycles, it’s less efficient)
3. Parallelism – the amount of work that can be executed independently by different processing units

There’s a whole catalog of different possible optimisations that move us within this space; a table in the paper shows their impact compared to a baseline sparse push implementation.

The sparse push baseline (as implemented for the PageRankDelta algorithm) looks like Fig 2 below. On each iteration each vertex on the frontier sends its delta (change in rank value) to its out-neighbours. The set of vertices in the frontier is maintained in a sparse data structure.
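As a rough illustration of the idea (this is a hedged sketch, not GraphIt's generated code — the function and variable names are mine, and the graph is assumed to be given as adjacency lists), a SparsePush step might look like:

```python
def sparse_push_step(out_neighbors, frontier, delta, damping=0.85):
    """One SparsePush iteration: each active vertex pushes its delta to
    its out-neighbours. `frontier` is a sparse list of active vertex ids."""
    num_vertices = len(out_neighbors)
    received = [0.0] * num_vertices
    for src in frontier:                    # only active vertices do work
        out_degree = len(out_neighbors[src])
        if out_degree == 0:
            continue
        contribution = damping * delta[src] / out_degree
        for dst in out_neighbors[src]:      # push along out-edges
            received[dst] += contribution   # scattered random writes
    return received
```

Note the accumulate into `received[dst]` is a scatter to a random destination — in a parallel push implementation that update would need synchronisation.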

• In DensePull we instead have each vertex iterate over its incoming neighbours
• In DensePush we don’t bother maintaining the frontier in a sparse format and instead iterate over all vertices checking each one to see if it’s in the frontier as we go.
• DensePull-SparsePush and DensePush-SparsePush are hybrids that switch between SparsePush and the corresponding dense traversal based on the size of the active set.
• For the representation of the frontier itself we can choose between various representations such as bitvectors, boolean arrays, and sparse arrays.

For each of the above traversal modes we have different choices for parallelisation. We can process vertices in parallel (vertex-parallel), or for heavily skewed graphs where the workload of a vertex is proportional to its number of incoming edges we might prefer an edge-aware-vertex-parallel approach instead. Alternatively we could go all in and use an edge-parallel approach.
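The core of the edge-aware-vertex-parallel idea is to hand each worker a vertex range with a roughly equal *edge* count rather than an equal vertex count. A minimal sketch of such a partitioner (the chunking scheme and names are my assumptions, not GraphIt's implementation):

```python
def edge_aware_chunks(out_degree, edges_per_chunk):
    """Partition vertex ids into contiguous half-open ranges whose total
    out-degree is roughly edges_per_chunk, so each parallel task does a
    similar amount of work even on skewed (power-law) graphs."""
    chunks, start, acc = [], 0, 0
    for v, d in enumerate(out_degree):
        acc += d
        if acc >= edges_per_chunk:
            chunks.append((start, v + 1))   # close this vertex range
            start, acc = v + 1, 0
    if start < len(out_degree):
        chunks.append((start, len(out_degree)))
    return chunks
```

A plain vertex-parallel split of the same graph could give one worker a handful of hub vertices carrying most of the edges while the others sit idle.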

We can potentially employ cache partitioning, trying to keep random accesses within the last level cache (LLC). Vertices are partitioned into Segmented Subgraphs (SSGs). This improves locality but harms work efficiency due to vertex data replication from graph partitioning and merging of partial results. NUMA partitioning executes multiple SSGs in parallel on different sockets.
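A hedged sketch of the partitioning step (the bucketing rule and parameter are illustrative assumptions): edges are grouped so that the randomly-accessed vertex data inside one SSG spans only a cache-sized window.

```python
def build_ssgs(edges, num_vertices, segment_size):
    """Split a graph's edge list into Segmented Subgraphs (SSGs) by the
    range the source vertex falls in, so the random reads of source-vertex
    data while processing one SSG stay within an LLC-sized window."""
    num_segments = (num_vertices + segment_size - 1) // segment_size
    ssgs = [[] for _ in range(num_segments)]
    for src, dst in edges:
        ssgs[src // segment_size].append((src, dst))
    return ssgs
```

Processing the SSGs one at a time improves locality; the cost (the work-efficiency loss mentioned above) is that per-destination partial results produced in different SSGs must later be merged.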

Vertex data layout (arrays of structs or structs of arrays) can also affect locality of memory accesses. Consider a random lookup in an array and then accessing two fields from that vertex (arrays of structs) versus two random lookups (structs of arrays). However, the former approach expands the working set size for no benefit if fields are not typically accessed together.
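The two layouts can be sketched as follows (illustrative Python stand-ins — in the generated C++ this is a choice of struct layout):

```python
# Array-of-structs: one record per vertex; a single random index brings
# both fields into cache together -- good when fields are used together.
vertices_aos = [{"rank": 0.25, "delta": 0.10},
                {"rank": 0.50, "delta": 0.20}]

# Struct-of-arrays: one array per field; a kernel touching only `delta`
# streams a smaller working set -- good when fields are used separately.
rank = [0.25, 0.50]
delta = [0.10, 0.20]

def read_both_aos(i):
    v = vertices_aos[i]         # one random access, both fields adjacent
    return v["rank"] + v["delta"]

def read_both_soa(i):
    return rank[i] + delta[i]   # two random accesses, separate arrays
```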

Finally, when two graph kernels have the same traversal pattern we can fuse their traversals (kernel fusion).
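Kernel fusion is easiest to see side by side (a hedged sketch with made-up kernel names — two edge-sum kernels sharing a traversal):

```python
def unfused(out_neighbors, x, y):
    """Two kernels, two full passes over the same edges."""
    sum_x = [0.0] * len(x)
    sum_y = [0.0] * len(y)
    for src, ns in enumerate(out_neighbors):
        for dst in ns:
            sum_x[dst] += x[src]
    for src, ns in enumerate(out_neighbors):
        for dst in ns:
            sum_y[dst] += y[src]
    return sum_x, sum_y

def fused(out_neighbors, x, y):
    """Same result in one pass: each edge (and the vertex data it
    touches) is brought into cache once instead of twice."""
    sum_x = [0.0] * len(x)
    sum_y = [0.0] * len(y)
    for src, ns in enumerate(out_neighbors):
        for dst in ns:
            sum_x[dst] += x[src]
            sum_y[dst] += y[src]
    return sum_x, sum_y
```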

That’s a lot of choices, and a fine way of polluting the expression of the core algorithm with optimisation details.

### The GraphIt DSL for expressing graph algorithms

GraphIt offers a high-level DSL for expressing algorithms at a level above these concerns. Here’s PageRankDelta in full, including all the setup:

The heart of the matter is expressed in lines 31 through 38. On line 32, the `from` operator ensures that only edges with a source vertex in the frontier are traversed, and the `apply` operator acts on the selected edges.

This separation enables the compiler to generate complex code for different traversal modes and parallelization optimizations, while inserting appropriate data access and synchronization instructions for the `updateEdge` function.
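Why the inserted synchronisation matters: in push mode many sources may update the same destination concurrently, while in pull mode each destination is written by exactly one worker. A hedged illustration (a Python lock standing in for the atomic add the compiler would emit; names are mine):

```python
import threading

received = [0.0] * 4
lock = threading.Lock()   # stand-in for an atomic fetch-and-add

def update_edge_push(src, dst, contribution):
    # Push direction: concurrent writers may target the same dst,
    # so the generated accumulate must be synchronised.
    with lock:
        received[dst] += contribution

def update_edge_pull(dst, total):
    # Pull direction: dst is owned by one worker; a plain write is safe.
    received[dst] = total
```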

As a comparison, the three core lines of the GraphIt expression require 16 lines of code in Ligra:

### The scheduling language

Looking again at the GraphIt DSL version of PageRankDelta, notice the funny `#s1#` label on line 32. These labels identify statements to which optimisations can be applied, and a scheduling language maps these labels to points in the optimisation space.

The scheduling functions allow choices to be made in each of the optimisation areas we looked at earlier:

For example, the following schedule elects to use the “DensePull-SparsePush” traversal and leaves all other configuration options at their default settings:

The following examples show pseudocode for PageRankDelta generated with different schedules.


These are just some of the easier examples; throw in NUMA and cache optimisation as well and it can get really hairy! The total optimisation space is as follows:


### Compilation and the graph iteration space

Under the covers, a graph is represented by an adjacency matrix, and the graph iteration space has four dimensions: the Segmented Subgraph ID (SSG_ID); the Blocked Subgraph ID (BSG_ID); an OuterIter ID indicating column position when iterating over columns; and an InnerIter ID indicating row position when iterating over rows.
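As a loose illustration only (the exact coordinate scheme is defined in the paper; the segment/block parameters, the global rather than nested block numbering, and the pull-style column = destination / row = source convention here are my assumptions), an edge can be mapped to a point in this space:

```python
def iteration_coords(src, dst, segment_size, block_size):
    """Map an edge to illustrative (SSG_ID, BSG_ID, OuterIter, InnerIter)
    coordinates for a pull-style (column-major) traversal, where the
    column is the vertex being processed and the row is its neighbour."""
    ssg_id = src // segment_size   # which cache-sized segmented subgraph
    bsg_id = src // block_size     # which blocked subgraph
    outer = dst                    # column position
    inner = src                    # row position
    return ssg_id, bsg_id, outer, inner
```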


Each of the four dimensions in the graph iteration space is annotated with tags to indicate the traversal direction and optimisation strategies from the schedule. The scheduling language is mapped down to graph iteration space vectors and tags (details in section 5 of the paper), from which the GraphIt compiler generates optimised C++ code (details in section 6.1 – 6.3).

### Autotuning

GraphIt can have up to $10^5$ valid schedules, with each run taking more than 30 seconds for the paper’s set of applications and input graphs, so an exhaustive search would take weeks. Instead, the authors build an autotuner on top of OpenTuner that uses stochastic search techniques to find high-performance schedules within a reasonable amount of time.
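The shape of the search is easy to sketch. This is a plain random-sampling stand-in, not OpenTuner (which uses an ensemble of smarter search techniques); the schedule-space encoding and names are my assumptions:

```python
import random

def autotune(schedule_space, measure, budget=50, seed=0):
    """Stochastic search over a schedule space given as a dict of
    option -> list of candidate values. `measure` runs the program
    under a schedule and returns its execution time."""
    rng = random.Random(seed)
    best_schedule, best_time = None, float("inf")
    for _ in range(budget):
        schedule = {k: rng.choice(v) for k, v in schedule_space.items()}
        t = measure(schedule)
        if t < best_time:
            best_schedule, best_time = schedule, t
    return best_schedule, best_time
```

Even naive sampling like this beats exhaustive enumeration when each measurement costs 30+ seconds and the space has $10^5$ points.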

### Evaluation

GraphIt is compared against six different state-of-the-art in-memory graph processing frameworks on a dual-socket system with 12 cores per socket and 128GB of memory. The input datasets used for the evaluation are shown in the following table.

Here are the execution times for GraphIt versus the other systems across a variety of algorithms over these datasets:

And here is a line-of-code comparison for expressing those algorithms:


## WireGuard gets a new, user-friendly service provider by Jim Salter Tuesday December 11th, 2018 at 1:34 PM

1 Share

Following our earlier WireGuard coverage, commercial VPN provider IVPN's chief marketing officer reached out to me to let me know his company was adding WireGuard support to its offering and asked if I'd be interested in covering the launch. Honestly, I planned to brush him off—there are a million VPN providers out there, and at least 999,000 of them are pretty shady—so I answered with a quick, dirty trick question: what are you doing on the Windows side?

Viktor surprised me with a picture-perfect answer that ruined my plans to get rid of him fast:

The official Ars stance on VPN recommendations is that we can't recommend anyone whose policies we can't independently verify and whose log retention we can't audit ourselves. This sounds like a cop-out from having to make a recommendation, but this is a service that readers will likely be putting a significant amount of trust in, and it would be irresponsible to give a recommendation that important without being able to provide assurances.