
Collegiate Teams Compete at RC Airplane Heavy-Lift Challenge


Who knew that RC flying and weightlifting could be morphed together? All you have to do is omit the dumbbells and tight Lycra outfits of weightlifting. Then get rid of the aerobatics and crashes of RC flying. Oh, wait…keep the crashes. There are lots of those in RC heavy-lifting!

The Society of Automotive Engineers (SAE) has been hosting heavy-lift competitions for collegiate teams since the 1980s. It has grown to include two events each year, SAE Aero Design East and West. Not only is the competition fierce, just getting in can be a challenge. Spots fill up fast and many teams are pushed to a waiting list.

SAE Aero Design West for 2017 took place in Fort Worth, Texas during the second weekend in March. SAE and the Fort Worth Thunderbirds RC club hosted more than 70 teams from colleges and universities all over the world. Most of the teams had been preparing for months to get to this point. Some would find success as aerial pack mules. Others, well…not so much.

Mighty Micros

The event is divided into three distinct classes: Micro, Regular, and Advanced. While the overall goal for every team is to carry a relatively heavy load, each class has its own specific rules and objectives. In the Micro class, the airplanes could be disassembled into subcomponents. These pieces were stored in a tube no more than 6 inches (15.2 cm) in diameter. The length of the tube was not defined, but the overall weight of the loaded tube could not exceed 10 pounds (4.5 kg). The scoring system encouraged smaller tubes. Some teams managed to pack their model into tubes as short as 3.5 inches (8.9 cm)!

Micro-class airplanes could be disassembled to fit in a 6"-diameter tube. The shorter the better.

As the class name and its packaging requirements suggest, the Micro models were often quite small. Many had wingspans of 30 inches to 40 inches (76.2 cm-101.6 cm). These little electric-powered flying machines had the most variation in design of the three classes. There were definitely some very clever and eclectic ideas on display.

The prime goal for the Micro-class flyers was to complete one orbit of the flying field while carrying a payload. Each flight started with a hand launch. The landings were particularly challenging in this class because even a successful flight would earn zero points if any part of the airplane (other than a propeller blade) detached in flight or on landing. Because these models lacked landing gear, belly landings on the grass field took a destructive toll on the heavily laden micro contenders.

Additional points could be earned for speedy assembly of a micro model. The clock started with the airplane tucked away in its tube and stopped when it was ready to take flight. The fastest teams completed the transformation in well under a minute.

Not-So-Regular Airplanes

"Regular Class" is a horribly inaccurate misnomer. There was nothing regular about these impressively large and graceful airplanes. The Regular-class contenders pretended to be airliners. Their goal was to haul tennis balls that represented paying passengers. At around 2 ounces (56.7 gm) each, a 2.6-inch-diameter (5.6cm) tennis ball takes up plenty of space, but isn't much of a mass burden. So the airplanes were also required to carry 8 to 12 ounces (227 gm to 340 gm) of "luggage" for each passenger. That mass added up quickly!

Regular-class models carried tennis ball "passengers" and their simulated luggage.

Regular-class models had no size restrictions, but were limited in the amount of power that their electric motors could produce. The power systems had onboard governors that kept power capped at 1000 watts. While that may sound like a lot of power, keep in mind that we're talking about very big airplanes hauling 20 to 30 pounds (9 kg to 14 kg) of tennis balls and metal ballast. Tribal knowledge in the RC community dictates that you need at least 50 watts per pound (total flying weight) for a viable power system. Most RC sport airplanes have 100 watts per pound or more. My rough estimates suggest that some of the Regular-class planes were taking wing at around 25 watts per pound (a 1000-watt cap spread across roughly 40 pounds of gross weight)!

As with the Micro class, Regular-class competitors had to complete one circuit of the field for a successful flight. The Regular models, however, were equipped with landing gear and could use up to 200 feet of runway for takeoff. That presented its own unique problems, as I'll explain in a bit.

Humanitarian Engineers

The Advanced-class airplanes were similar in size to many of the Regular models, but their mission made them inherently more complex. These airplanes carried a static payload as well as one or more payloads that were released in flight. These droppable payloads, each weighing about 2.25 pounds (1 kg), represented humanitarian aid packages being delivered to needy recipients.

The drop zone was a series of concentric circles painted onto the grassy field beside the runway. Its outer diameter was 120 feet (36.6 m). The pilot was guided over the target via verbal commands from teammates using a video feed and data telemetry streaming from the airplane. A payload specialist released the payload at (hopefully) just the right time. The closer the "packages" were to the center of the target, the higher the score.

Airplanes in the Advanced class released payloads into a marked drop zone.

Rather than electric motors, Advanced-class airplanes were powered by internal combustion engines. The total allowable displacement for all engines was .46 cubic inches (7.5 cc). Most contenders chose to use a single engine, but one twin competed as well.

Serious Business

This event is not just a fun weekend at the flying field. The first day of the competition took place completely indoors as teams presented their models for technical inspections. The airplanes had to meet very strict design and safety requirements to get an approving nod from the judges.

All airplanes had to undergo a thorough inspection before being allowed to fly. (Lee Ray photo)

At this early stage in the competition, it was clear that many of the competitors had no prior experience with RC models. We all have to start somewhere. Equally evident was the fact that many of the foreign teams did not have easy access to common off-the-shelf RC components…not that they would (or should) let that deter them. Even simple items like control surface hinges had to be fabricated by some teams. Such examples of ingenuity born out of necessity were impressive.

Teams gave engineering presentations about their models and answered questions for judges. (Lee Ray photo)

The teams also had to present written and oral reports about their designs. Judges asked pointed questions about design choices. They also probed into some of the teamwork challenges that may have crept in over the course of the project. These activities were graded and factored into each team's overall score.

Achilles' Wheel

Flying began early Saturday morning. As in life, there were no throw-away flight rounds. Every flight (or lack thereof) counted. Many teams quickly found out that it was deceptively easy to score a goose egg. All it took was a soft hand launch or a miscommunication at the starting line for a flight to end before it ever began.

Micro-class models had to be hand-launched and belly-landed, which caused problems for a few teams.

There were a lot of crashes in that first round, especially in the Micro class. More amazing than the sheer number of crashes was how many airplanes returned for round two. Throughout the weekend, crashed models that would normally be thrown into a garbage can were miraculously pieced back together, inspected, and flown again.

Teams could provide their own pilot or use one of the ace pilots provided by the Thunderbirds club. Those guys were good! There were lots of tense moments as these veteran flyers coaxed overloaded airplanes around the pattern time after time. No matter who was at the controls, every successful landing earned cheers from the crowd.

By the third round of flying, the more consistent and well-prepared teams began to emerge in each class. Not that any team was immune to gremlins. Some groups just found more effective ways of keeping them at bay time after time.

There was a huge variety of designs that competed. This Rogallo-wing model captured first place in the Micro class.

What I found interesting was how many of the failures were unrelated to the airframe design or craftsmanship of the teams. For the most part, that work was solid. A lot of unsuccessful flights in the Regular class were due to overburdened landing gear. Wheels and mechanisms that would likely perform just fine on a standard RC model simply couldn't cope with the extra weight of these loaded-down airplanes.

The Advanced-class drop zone seemed to have an invisible shield surrounding it. Teams were allowed to drop their payloads from as low as 100 feet (30.5 m), but close drops were elusive. Only 5 of the 17 teams managed to score any drop points at all. Then, in the sixth and final round, Georgia Tech placed all of its packages right on the bullseye. That feat helped push the team to the top of the class.

Advanced-class teams used video and telemetry systems to help aim their payload drops.

The final two flight rounds were completed on Sunday morning. Some teams that had been plagued by problems all weekend were finally able to get in a successful flight. Those landings resulted in high fives and much relief. Unfortunately, a few of the teams were never able to complete a flight. Watch out for those teams next year…they're hungry.

Results

After the final flight round, points were tallied and the winners in each class were named. There's no doubt that all of the participants walked away with a much better understanding of what it takes to design an airplane. This competition illustrated that the rigid technical side of things is important, but so is the softer, human aspect of coming together to make things work.

This was an RC event unlike any that I had ever been to before. Even if you're not into RC airplanes, there was plenty to keep spectators intrigued and entertained. I look forward to returning next year. For those of you on the east coast, you can still catch this year's SAE Aero Design East in Lakeland, Florida April 21-23.

Winners

Micro Class:
1. Georgia Institute of Technology
2. Louisiana State University
3. Texas A&M University

Regular Class:
1. University of Cincinnati
2. Texas A&M University
3. University of Manitoba

Advanced Class:
1. Georgia Institute of Technology
2. California State University – Northridge
3. University of Michigan – Ann Arbor

Terry is a freelance writer living in Lubbock, Texas. Visit his website at TerryDunn.org and follow him on Twitter and Facebook. You can also hear Terry talk about RC hobbies as one of the hosts of the RC Roundtable podcast.


Pyret


This document has detailed information on the Pyret grammar and the behavior of its expression forms and built-in libraries, along with many examples and some longer descriptions of language design choices. If you want to do something in a program and can’t find out how in this document, feel free to post a message on the Pyret discussion list, and we’ll be happy to help.

If you want to learn about (or teach!) programming and computer science using Pyret, check out Programming and Programming Languages (PAPL), which is a textbook on programming with all its examples in Pyret.


Simplifying complex business logic with Python's Kanren · Jefferson Heard

1 Share

So-called “logic programming” has been a niche programming topic since Prolog rose to prominence in the 1980s. In my experience, most posts that cover logic programming introduce the core concepts and stop there, and the examples they give are mostly toy problems. This post, then, will start with “what you can do with logic programming in Python” and move toward the core concepts that way.

If you’re looking for an explanation of unification, a history of logic programming, or an argument for why you would even write web servers this way, there are plenty of posts that will extol the virtues of logic programming over other methods. This is not that post. I’m aware of these things, but my goal in this post is to help you take the part of your code that is least maintainable as written in a traditional Python style and make it cleaner, clearer, and less prone to bugs using logic programming via the Kanren library.

Not what it is, but what is it for?

Kanren gives you a way to simplify how you specify “business logic” and how your code responds to it. Business logic is an ill-defined term, but in my experience it consists of all those if-then-else statements, nested cases, and rats’ warrens of callbacks that evolve over time in complex applications, whether they focus on complex data processing or on responding to users who are themselves experts at something.

Kanren lets you express this logic in terms of rules and facts. I use Kanren to do things like consistency checks in entered data, validity checks for records that are POSTed to my APIs, and to perform complex filtering on users and records that don’t translate well into database queries.

Before we get started, you might want to do a quick:

$ pip install kanren

For if not then-else, what?

Although I will work up to something more substantial, let’s start with a Hello World. I start here because logic programming is different enough from the way most programmers think that a tiny, self-contained example will illustrate some basic points.

>>> from kanren import run, eq, var
>>> x = var()
>>> run(1, x, eq(x, 5))
(5,)

We’ll skip the import and focus on the next statement. x = var() declares a variable, which run will try to find one or more values for. run is a function that takes the following:

  • The number of results you want.
  • The variables whose values you are interested in.
  • The set of rules that defines the space of valid values for your variables.

The third bit is the most important, because it gives us a clue as to what eq(x, 5) means. It does not mean “assign 5 to x”. Instead it constrains the result set so that it only includes results where x is equal to 5. What’s the difference?

It will take a more complex example to truly show the difference, but for now suffice to say that eq(x, 5) works much more like the condition part of an if statement than a statement inside the if:

for x, y, z in all_possibilities:
  if other_logic:
    # ... whatever other constraints apply ...
    if x == 5:
      yield (x,)

In reality, Kanren is a highly efficient, optimizing evaluator of logical expressions. There is (usually) no loop, but for illustration purposes, this is what our example “means”. You can already see that we’ve taken a hairball of potentially nested ifs and fors to a flat, sequential code structure in our example.

A (slightly) more illustrative example

>>> from kanren import membero
>>> set_a = {1, 2, 3}
>>> set_b = {2, 3, 4}
>>> run(2, x, (membero, x, set_a),  # x must be a member of set_a...
...           (membero, x, set_b))  # ...and also a member of set_b
(2, 3)
>>> run(1, x, (membero, x, set_a),
...           (membero, x, set_b))
(2,)

This example, taken from the Kanren README, is a little more illustrative. It uses a new (to us) primitive, membero, to require that x be a member of a set. Note that the structure we’re checking membership of only has to be iterable; it does not have to be a literal Python set. Kanren operates on primitive Python types and their analogues, so if it swims like a duck and quacks like a duck, then it’s a duck for Kanren purposes. There are no new data structures to learn, conversions to make, or classes to unpack.

I also introduced a different way to write the predicate. Instead of membero(x, set_a), I wrote (membero, x, set_a). Although possibly a bit less readable at first, this style keeps nested structures more readable, and after using the library in my own projects for a year or two, I find I like it better than the other.

Now we see a new behavior of run. It takes any number of clauses at the end of the parameter list, and provides the logical and of all of them. For our purposes, we want two values of x that satisfy all the predicates.

Satisfying the first predicate, (membero, x, set_a) are the values 1, 2, and 3, since these are the members of set_a. Satisfying the second predicate are the values 2, 3, and 4, the members of set_b. The only results shared between the two are 2 and 3, so these are the results of our call to run.

In the first instance of run, we ask for two results. Each result is a single value of x (as opposed to one set of members that match), and so we get a tuple consisting of both matching numbers. If we ask for only one result, we get just one number. This is important because, as I said earlier, Kanren works on so-called duck typing (walks, swims, quacks, therefore serves the purposes of a duck even if you happen to call it a swan). This means results can be a tuple of numbers, dicts, tuples, lists, or custom types – anything that can be compared in the way the predicates do comparisons. This makes Kanren very pythonic and very useful.
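For instance, here is a tiny sketch (my example, not one from the README) where the results are tuples rather than numbers; run(0, ...) asks for all results:

>>> run(0, x, membero(x, [("a", 1), ("b", 2)]))
(('a', 1), ('b', 2))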

Making it more relatable

This is all fine, but it’s hardly something that by itself will make our logic more readable. For that, we need to talk about relations and facts. Here is an example adapted from the Kanren README:

>>> from kanren import Relation, facts
>>> parent = Relation()
>>> facts(parent, ("Homer", "Bart"),
...               ("Marge", "Bart"),
...               ("Homer", "Lisa"),
...               ("Marge", "Lisa"),
...               ("Homer", "Maggie"),
...               ("Marge", "Maggie"),
...               ("Abe",  "Homer"))

Now let’s get one of the parents of Bart:

>>> run(1, x, (parent, x, "Bart"))
('Marge',)

Two of Homer’s children:

>>> run(2, x, parent("Homer", x))
('Bart', 'Lisa')

Note that there’s no guaranteed order. The answer could just as easily have been “Homer” for the first query, or “Lisa” and “Maggie” for the second.

Now, to show that relations are more than just fancy ways to construct tuples, let’s figure out grandparents. We use an intermediate variable, y, to represent the parent of Bart. Then x is the parent of the parent of Bart.

>>> y = var()
>>> run(1, x, parent(x, y),
              parent(y, 'Bart'))
('Abe',)

>>> run(1, (x, y), parent(x, y),
                   parent(y, 'Bart'))
(('Abe', 'Homer'),)

This shows off Kanren’s advanced form of pattern matching known as “unification.” Unification and backtracking are really beyond the scope of this tutorial, but you may find it helps to understand them in detail as you use Kanren in your own programs. In that case, start with Kanren’s README and work from there. For now it is enough to see that this works, and to consider its implications for writing cleaner Python code.

Note that we can list more than one variable whose value we are interested in. This produces a nested tuple of variable values in the same respective order as they are listed in run.

How might we have written this reasonably (if naïvely) in non-Kanren Python?

>>> parent_child = {
...   "Homer": ("Bart", "Lisa", "Maggie"),
...   "Marge": ("Bart", "Lisa", "Maggie"),
...   "Abe": ("Homer",)
... }

>>> # two of Homer's children is a simple lookup...
>>> parent_child['Homer'][0:2]
('Bart', 'Lisa')

>>> # ...but Bart's parents require a scan of the whole dict
>>> barts_parents = []
>>> for parent in parent_child:
...   if 'Bart' in parent_child[parent]:
...     barts_parents.append(parent)

>>> # and grandparents require a nested scan
>>> barts_grandparents = []
>>> for parent in barts_parents:
...   for grandparent in parent_child:
...     if parent in parent_child[grandparent]:
...       barts_grandparents.append(grandparent)

The difference in legibility between the Kanren example and its admittedly naïve Python equivalent should be obvious. In the Kanren example, we describe relationships declaratively rather than directionally. This not only lets us query the relationship from either direction with the same statement, it also allows us to build these relationships up over time without having to maintain multiple dictionaries or describe relationships in terms of iteration and if statements.

For simple logic that will never grow, it may be that the above is acceptable, but it does tend to create code that people put big comments around warning the interns off touching it.
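To make the “build up over time” point concrete: facts can be asserted at any time, and every query written against the relation sees them immediately. A small sketch continuing the earlier session (the new fact is my own addition, and result order isn’t guaranteed):

>>> facts(parent, ("Mona", "Homer"))  # assert a new fact whenever we learn it
>>> run(0, x, parent(x, "Homer"))     # 0 = give me all results
('Abe', 'Mona')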

Applying it to a real-world example

Now for a more “real-world” test of Kanren. Let’s create a consistency test for a complex piece of JSON. First we’ll specify the JSON Schema for items in a coffee shop order:

{
  "type": "object",
  "required": ["order_destination"],
  "properties": {
    "order_destination": {"type": "string", "enum": ["espresso_machine", "pastry_counter"]},
    "drink": {"$ref": "#/definitions/drink"},
    "pastry": {"$ref": "#/definitions/pastry"}
  },
  "definitions": {
    "drink": {
      "type": "object",
      "required": ["size", "drink_type"],
      "properties": {
        "size": {"type": "string", "enum": ["sm", "md", "lg", "xl"]},
        "drink_type": {"type": "string", "enum": ["drip", "espresso", "latte", "cappuccino", "americano"]},
        "extras": {"type": "array", "items": {"$ref": "#/definitions/extras"}}
      }
    },
    "pastry": {
      "type": "object",
      "required": ["quantity", "item"],
      "properties": {
        "item": {"type": "string", "enum": ["donut", "sandwich", "bagel", "danish"]},
        "quantity": {"type": "integer", "minimum": 1, "maximum": 144},
        "heated": {"type": "boolean", "default": false}
      }
    },
    "extras": {
      "type": "object",
      "properties": {
        "flavoring": {"type": "string"},
        "milk_type": {"type": "string", "enum": ["soy", "almond", "skim"]}
      }
    }
  }
}

One thing that schema languages often cannot handle well is conditional requirements. Conditional requirements occur when:

  • The presence of a value in one field limits the valid values in another field, or
  • The presence of a value in one field requires the presence of another field.

In our case, the above schema defines an order at a coffee shop, but there are valid JSON documents that nevertheless will not contain all the information needed to complete an order. We need some extra validation steps. In particular:

  • Depending on the order destination, we need to ensure the presence of the matching optional section, drink or pastry
  • Shots of espresso can only be small or medium - no large or xl
  • Cappuccinos can only be small, medium, or large. (we’re picky)
  • Shots of espresso do not have milk in them (or they’d be something else)
  • Americanos do not have milk.

We can create logic with Kanren that validates our JSON beyond what can simply be done with basic schema validation.

from kanren import Relation, facts, membero, run, var, eq, lany

class ValidationError(Exception):
  """Raised when an order document is internally inconsistent."""

def validate_order(order):

  # each order_destination requires a matching section in the document
  must_contain_section = Relation()
  facts(must_contain_section, ('espresso_machine', 'drink'),
                              ('pastry_counter', 'pastry'))

  x = var()
  valid = run(1, x, must_contain_section(order['order_destination'], x),
                    membero(x, set(order.keys())))  # see note 1 below

  if len(valid) == 0:
    raise ValidationError("Required section not present")
  elif order['order_destination'] == 'espresso_machine':
    drink = order['drink']

    # which drink types may include milk (True) and which may not (False)
    milk_comes_with = Relation('milk_comes_with')
    facts(milk_comes_with, ('drip', True),
                           ('latte', True),
                           ('cappuccino', True),
                           ('espresso', False),   # espresso never includes milk
                           ('americano', False))  # nor does an americano

    drink_sizes = Relation('drink_size')

    # the valid sizes for each drink type
    facts(drink_sizes, *(('drip', sz) for sz in ['sm', 'md', 'lg', 'xl']),
                       *(('latte', sz) for sz in ['sm', 'md', 'lg', 'xl']),
                       *(('americano', sz) for sz in ['sm', 'md', 'lg', 'xl']),
                       *(('cappuccino', sz) for sz in ['sm', 'md', 'lg']),
                       *(('espresso', sz) for sz in ['sm', 'md']))

    drink_type = drink['drink_type']

    # did the customer ask for milk in any of the "extras"?
    specified_milk = False
    for e in drink.get('extras', []):
      if 'milk_type' in e:
        specified_milk = True
        break

    y = var()
    valid = run(1, y,
      drink_sizes(drink_type, drink['size']),  # the size must be valid for this drink type
      # and either no milk was asked for...
      lany(
        eq(specified_milk, False),
        # ...or this drink type is one that can come with milk
        milk_comes_with(drink_type, specified_milk)))

    if len(valid) == 0:
      raise ValidationError("Drink size too large for drink type or milk included in non-milk drink")
  else:
    pass  # pastry orders need no checks beyond the section check above
This results in the following passing validation:

validate_order({"order_destination": "espresso_machine",
                "drink": {"drink_type": "espresso",
                          "size": "sm"}})

validate_order({"order_destination": "espresso_machine",
                "drink": {"drink_type": "latte",
                          "size": "lg",
                          "extras": [{"milk_type": "soy"}]}})        

validate_order({"order_destination": "espresso_machine",
                "drink": {"drink_type": "latte",
                          "size": "lg"}})            

And the following will not pass validation:

# espresso only comes in small and medium
validate_order({"order_destination": "espresso_machine",
                "drink": {"drink_type": "espresso",
                          "size": "lg"}})

# espresso is not a milk drink
validate_order({"order_destination": "espresso_machine",
      "drink": {"drink_type": "espresso",
                "size": "sm",
                "extras": [{"milk_type": "soy"}]}})

# the required "drink" section is missing entirely
validate_order({"order_destination": "espresso_machine"})

Notes
  1. Here we make a set out of the properties of our “order” document. The full test makes sure that both clauses are true. So x must be the required section for our order type, and it must be present as a named property in our document.

Thus this is valid:

{"order_destination": "espresso_machine", "drink": {...}}

And this is not:

{"order_destination": "espresso_machine", "pastry": {...}}

Further thoughts

Reusability is your friend. So far we’ve only seen interactive usage of Kanren. What about embedding it in software? It’s probably obvious that you can wrap the run call in a function and work with the results, but it turns out you can wrap up and make relations and predicates reusable as well. See the Godfather example in Kanren’s source. You can even make custom types usable within Kanren’s logical relations.
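As a minimal sketch of what that reuse can look like (the names here are mine, not from the Kanren source), a compound goal is just a Python function that returns a combination of simpler goals, for example with lall (logical and):

from kanren import Relation, facts, lall, run, var

parent = Relation()
facts(parent, ("Abe", "Homer"), ("Homer", "Bart"))

def grandparent(gp, gc):
  # reusable compound goal: gp is a parent of some p, who is a parent of gc
  p = var()
  return lall(parent(gp, p), parent(p, gc))

x = var()
print(run(1, x, grandparent(x, "Bart")))  # ('Abe',)

Once defined, grandparent composes with other goals exactly as the built-in primitives do.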

There are things missing from the complex example. It’s possible to create much more complex validations using Kanren and all its primitives. There are also other ways to express logic more succinctly than we did in the example; however, for an introduction, I think they would be too dense to be readily digested. It’s best to experiment with your code and see what works.
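As one taste of that succinctness (again a sketch of mine, not code from the example above): conde expresses an or of ands in a single goal, where each tuple argument is one conjunction of clauses:

>>> from kanren import conde
>>> run(0, x, conde((eq(x, 'sm'),), (eq(x, 'md'),)))
('sm', 'md')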

For further reading, I suggest starting with the specification of miniKanren, which was originally written in Scheme, and then the Python Kanren repo.


Prioritising IT projects has never mattered more


First published on CIO.com

IT project prioritisation is crucial for your organisation.

Selecting the wrong projects, ones that don’t deliver strategic value or the required ROI, can hurt the bottom line and reduce your business’s chance of hitting its targets and achieving its goals. You should take a day or two (or longer if your portfolio and budgets demand more) and focus on the prioritisation and selection of projects and proposals so that you only commit to the ‘right’ projects for the organisation.

How to prioritise? How often should you do this?


Magnus Magnus Clever Clever

Black to play
Stefansson vs Carlsen

21. …?
See game for solution.


Insert D-Wave Post Here


In the two months since I last blogged, the US has continued its descent into madness.  Yet even while so many certainties have proven ephemeral as the morning dew—the US’s autonomy from Russia, the sanity of our nuclear chain of command, the outcome of our Civil War, the constraints on rulers that supposedly set us apart from the world’s dictator-run hellholes—I’ve learned that certain facts of life remain constant.

The moon still waxes and wanes.  Electrons remain bound to their nuclei.  P≠NP proofs still fill my inbox.  Squirrels still gather acorns.  And—of course!—people continue to claim big quantum speedups using D-Wave devices, and those claims still require careful scrutiny.

With that preamble, I hereby offer you eight quantum computing news items.


Cathy McGeoch Episode II: The Selby Comparison

On January 17, a group from D-Wave—including Cathy McGeoch, who now works directly for D-Wave—put out a preprint claiming a factor-of-2500 speedup for the D-Wave machine (the new, 2000-qubit one) compared to the best classical algorithms.  Notably, they wrote that the speedup persisted when they compared against simulated annealing, quantum Monte Carlo, and even the so-called Hamze-de Freitas-Selby (HFS) algorithm, which was often the classical victor in previous performance comparisons against the D-Wave machine.

Reading this, I was happy to see how far the discussion has advanced since 2013, when McGeoch and Cong Wang reported a factor-of-3600 speedup for the D-Wave machine, but then it turned out that they’d compared only against classical exact solvers rather than heuristics—a choice for which they were heavily criticized on this blog and elsewhere.  (And indeed, that particular speedup disappeared once the classical computer’s shackles were removed.)

So, when people asked me this January about the new speedup claim—the one even against the HFS algorithm—I replied that, even though we’ve by now been around this carousel several times, I felt like the ball was now firmly in the D-Wave skeptics’ court, to reproduce the observed performance classically.  And if, after a year or so, no one could, that would be a good time to start taking seriously that a D-Wave speedup might finally be here to stay—and to move on to the next question, of whether this speedup had anything to do with quantum computation, or only with the building of a piece of special-purpose optimization hardware.


A&M: Annealing and Matching

As it happened, it only took one month.  On March 2, Salvatore Mandrà, Helmut Katzgraber, and Creighton Thomas put up a response preprint, pointing out that the instances studied by the D-Wave group in their most recent comparison are actually reducible to the minimum-weight perfect matching problem—and for that reason, are solvable in polynomial time on a classical computer.   Much of Mandrà et al.’s paper just consists of graphs, wherein they plot the running times of the D-Wave machine and of a classical heuristic on the relevant instances—clearly all different flavors of exponential—and then Edmonds’ matching algorithm from the 1960s, which breaks away from the pack into polynomiality.

But let me bend over backwards to tell you the full story.  Last week, I had the privilege of visiting Texas A&M to give a talk.  While there, I got to meet Helmut Katzgraber, a condensed-matter physicist who’s one of the world experts on quantum annealing experiments, to talk to him about their new response paper.  Helmut was clear in his prediction that, with only small modifications to the instances considered, one could see similar performance by the D-Wave machine while avoiding the reduction to perfect matching.  With those future modifications, it’s possible that one really might see a D-Wave speedup that survived serious attempts by skeptics to make it go away.

But Helmut was equally clear in saying that, even in such a case, he sees no evidence at present that the speedup would be asymptotic or quantum-computational in nature.  In other words, he thinks the existing data is well explained by the observation that we’re comparing D-Wave against classical algorithms for Ising spin minimization problems on Chimera graphs, and D-Wave has heroically engineered an expensive piece of hardware specifically for Ising spin minimization problems on Chimera graphs and basically nothing else.  If so, then the prediction would be that such speedups as can be found are unlikely to extend either to more “practical” optimization problems—which need to be embedded into the Chimera graph with considerable losses—or to better scaling behavior on large instances.  (As usual, as long as the comparison is against the best classical algorithms, and as long as we grant the classical algorithm the same non-quantum advantages that the D-Wave machine enjoys, such as classical parallelism—as Rønnow et al advocated.)

Incidentally, my visit to Texas A&M was partly an “apology tour.”  When I announced on this blog that I was moving from MIT to UT Austin, I talked about the challenge and excitement of setting up a quantum computing research center in a place that currently had little quantum computing for hundreds of miles around.  This thoughtless remark inexcusably left out not only my friends at Louisiana State (like Jon Dowling and Mark Wilde), but even closer to home, Katzgraber and the others at Texas A&M.  I felt terrible about this for months.  So it gives me special satisfaction to have the opportunity to call out Katzgraber’s new work in this post.  In football, UT and A&M were longtime arch-rivals, but when it comes to the appropriate level of skepticism to apply to quantum supremacy claims, the Texas Republic seems remarkably unified.


When 15 MilliKelvin is Toasty

In other D-Wave-related scientific news, on Monday night Tameem Albash, Victor Martin-Mayor, and Itay Hen put out a preprint arguing that, in order for quantum annealing to have any real chance of yielding a speedup over classical optimization methods, the temperature of the annealer should decrease at least like 1/log(n), where n is the instance size, and more likely like 1/n^β (i.e., as an inverse power law).

If this is correct, then cold as the D-Wave machine is, at 0.015 degrees or whatever above absolute zero, it still wouldn’t be cold enough to see a scalable speedup, at least not without quantum fault-tolerance, something that D-Wave has so far eschewed.  With no error-correction, any constant temperature that’s above zero would cause dangerous level-crossings up to excited states when the instances get large enough.  Only a temperature that actually converged to zero as the problems got larger would suffice.
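The intuition behind that claim, at least as I understand it (my gloss, not the authors' detailed calculation), is a one-line Boltzmann argument: the equilibrium probability of being thermally kicked across a spectral gap Δ(n) scales like

$$ p_{\mathrm{excite}} \sim e^{-\Delta(n)/k_B T}, $$

so if Δ(n) closes as the instance size n grows, then keeping the excitation probability bounded below a constant forces k_B T to shrink at least in proportion to Δ(n), rather than staying fixed.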

Over the last few years, I’ve heard many experts make this exact same point in conversation, but this is the first time I’ve seen the argument spelled out in a paper, with explicit calculations (modulo assumptions) of the rate at which the temperature would need to go to zero for uncorrected quantum annealing to be a viable path to a speedup.  I lack the expertise to evaluate the calculations myself, but any experts who’d like to share their insight in the comments section are “warmly” (har har) invited.


“Their Current Numbers Are Still To Be Checked”

As some of you will have seen, The Economist now has a sprawling 10-page cover story about quantum computing and other quantum technologies.  I had some contact with the author while the story was in the works.

The piece covers a lot of ground and contains many true statements.  It could be much worse.

But I take issue with two things.

First, The Economist claims: “What is notable about the effort [to build scalable QCs] now is that the challenges are no longer scientific but have become matters of engineering.”  As John Preskill and others pointed out, this is pretty far from true, at least if we interpret the claim in the way most engineers and businesspeople would.

Yes, we know the rules of quantum mechanics, and the theory of quantum fault-tolerance, and a few promising applications; and the basic building blocks of QC have already been demonstrated in several platforms.  But if (let’s say) someone were to pony up $100 billion, asking only for a universal quantum computer as soon as possible, I think the rational thing to do would be to spend initially on a frenzy of basic research: should we bet on superconducting qubits, trapped ions, nonabelian anyons, photonics, a combination thereof, or something else?  (Even that is far from settled.)  Can we invent better error-correcting codes and magic state distillation schemes, in order to push the resource requirements for universal QC down by three or four orders of magnitude?  Which decoherence mechanisms will be relevant when we try to do this stuff at scale?  And of course, which new quantum algorithms can we discover, and which new cryptographic codes resistant to quantum attack?

The second statement I take issue with is this:

“For years experts questioned whether the [D-Wave] devices were actually exploiting quantum mechanics and whether they worked better than traditional computers.  Those questions have since been conclusively answered—yes, and sometimes”

I would instead say that the answers are:

  1. depends on what you mean by “exploit” (yes, there are quantum tunneling effects, but do they help you solve problems faster?), and
  2. no, the evidence remains weak to nonexistent that the D-Wave machine solves anything faster than a traditional computer—certainly if, by “traditional computer,” we mean a device that gets all the advantages of the D-Wave machine (e.g., classical parallelism, hardware heroically specialized to the one type of problem we’re testing on), but no quantum effects.

Shortly afterward, when discussing the race to achieve “quantum supremacy” (i.e., a clear quantum computing speedup for some task, not necessarily a useful one), the Economist piece hedges: “D-Wave has hinted it has already [achieved quantum supremacy], but has made similar claims in the past; their current numbers are still to be checked.”

To me, “their current numbers are still to be checked” deserves its place alongside “mistakes were made” among the great understatements of the English language—perhaps a fitting honor for The Economist.


Defeat Device

Some of you might also have seen that D-Wave announced a deal with Volkswagen, to use D-Wave machines for traffic flow.  I had some advance warning of this deal, when reporters called asking me to comment on it.  At least in the materials I saw, no evidence is discussed that the D-Wave machine actually solves whatever problem VW is interested in faster than it could be solved with a classical computer.  Indeed, in a pattern we’ve seen repeatedly for the past decade, the question of such evidence is never even directly confronted or acknowledged.

So I guess I’ll say the same thing here that I said to the journalists.  Namely, until there’s a paper or some other technical information, obviously there’s not much I can say about this D-Wave/Volkswagen collaboration.  But it would be astonishing if quantum supremacy were to be achieved on an application problem of interest to a carmaker, even as scientists struggle to achieve that milestone on contrived and artificial benchmarks, even as the milestone seems repeatedly to elude D-Wave itself on contrived and artificial benchmarks.  In the previous such partnerships—such as that with Lockheed Martin—we can reasonably guess that no convincing evidence for quantum supremacy was found, because if it had been, it would’ve been trumpeted from the rooftops.

Anyway, I confess that I couldn’t resist adding a tiny snark—something about how, if these claims of amazing performance were found not to withstand an examination of the details, it would not be the first time in Volkswagen’s recent history.


Farewell to a Visionary Leader—One Who Was Trash-Talking Critics on Social Media A Decade Before President Trump

This isn’t really news, but since it happened since my last D-Wave post, I figured I should share.  Apparently D-Wave’s outspoken and inimitable founder, Geordie Rose, left D-Wave to form a machine-learning startup (see D-Wave’s leadership page, where Rose is absent).  I wish Geordie the best with his new venture.


Martinis Visits UT Austin

On Feb. 22, we were privileged to have John Martinis of Google visit UT Austin for a day and give the physics colloquium.  Martinis concentrated on the quest to achieve quantum supremacy, in the near future, using sampling problems inspired by theoretical proposals such as BosonSampling and IQP, but tailored to Google’s architecture.  He elaborated on Google’s plan to build a 49-qubit device within the next few years: basically, a 7×7 square array of superconducting qubits with controllable nearest-neighbor couplings.  To a layperson, 49 qubits might sound unimpressive compared to D-Wave’s 2000—but the point is that these qubits will hopefully maintain coherence times thousands of times longer than the D-Wave qubits, and will also support arbitrary quantum computations (rather than only annealing).  Obviously I don’t know whether Google will succeed in its announced plan, but if it does, I’m very optimistic about a convincing quantum supremacy demonstration being possible with this sort of device.

Perhaps most memorably, Martinis unveiled some spectacular data, which showed near-perfect agreement between Google’s earlier 9-qubit quantum computer and the theoretical predictions for a simulation of the Hofstadter butterfly (incidentally invented by Douglas Hofstadter, of Gödel, Escher, Bach fame, when he was still a physics graduate student).  My colleague Andrew Potter explained to me that the Hofstadter butterfly can’t be used to show quantum supremacy, because it’s mathematically equivalent to a system of non-interacting fermions, and can therefore be simulated in classical polynomial time.  But it’s certainly an impressive calibration test for Google’s device.


2000 Qubits Are Easy, 50 Qubits Are Hard

Just like the Google group, IBM has also publicly set itself the ambitious goal of building a 50-qubit superconducting quantum computer in the near future (i.e., the next few years).  Here in Austin, IBM held a quantum computing session at South by Southwest, so I went—my first exposure of any kind to SXSW.  There were 10 or 15 people in the audience; the purpose of the presentation was to walk through the use of the IBM Quantum Experience in designing 5-qubit quantum circuits and submitting them first to a simulator and then to IBM’s actual superconducting device.  (To the end user, of course, the real machine differs from the simulation only in that with the former, you can see the exact effects of decoherence.)  Afterward, I chatted with the presenters, who were extremely friendly and knowledgeable, and relieved (they said) that I found nothing substantial to criticize in their summary of quantum computing.

Hope everyone had a great Pi Day and Ides of March.
