
OpenStack on a diet, redux


Subhu writes that OpenStack’s blossoming project list comes at a cost to quality. I’d like to follow up with an even leaner approach based on an outline drafted during the OpenStack Core discussions after ODS Hong Kong, a year ago.

The key ideas in that draft are:

Only call services “core” if the user can detect them.

How the cloud is deployed or operated makes no difference to a user. We want app developers to program against the services a cloud exposes, not the machinery behind them.

Define both “core” and “common” services, but require only “core” services for a cloud that calls itself OpenStack compatible.

Separation of core and common lets us recognise common practice today, while also acknowledging that many ideas we’ve had in the past year or three are just 1.0 iterations; we don’t know which of them will stick, any more than one could predict which services on any major public cloud will thrive and which will vanish over time. Signalling that something is “core” means it is something we commit to keeping around for a long time. Signalling that something is “common” means it’s widespread practice for it to be available in an OpenStack environment, but not a requirement.

Require that “common” services can be self-deployed.

Just as you can install a library or a binary in your home directory, you can run services for yourself in a cloud. Services do not have to be provided by the cloud infrastructure provider; they can usually be run by users themselves, under their own accounts, as a series of VMs providing network services. Making it a requirement that users can self-provide a service before designating it common means that users can build on it: if a particular cloud doesn’t offer it, its users can self-provide it. All this requires is that the common service itself builds on core services, though it might also depend on other common services which could be self-deployed in advance of it.
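
To make that concrete, here is a rough sketch of what self-provisioning on core primitives looks like. The openstacksdk library, the clouds.yaml entry name, and the image, flavor and network names are all assumptions of the sketch, not anything prescribed above:

```python
# A sketch of self-provisioning a service using only core primitives:
# authenticate (Keystone), boot a VM (Nova, from a Glance image, on a
# Neutron network) and give it persistent storage (Cinder). Assumes
# openstacksdk and a clouds.yaml entry named "mycloud"; the image,
# flavor and network names are placeholders.
import openstack

conn = openstack.connect(cloud="mycloud")  # Keystone handles auth

image = conn.compute.find_image("ubuntu-14.04")       # Glance
flavor = conn.compute.find_flavor("m1.small")         # Nova
network = conn.network.find_network("my-tenant-net")  # Neutron

# Run the service's processes somewhere, under our own account.
server = conn.create_server(
    name="self-hosted-svc-0",
    image=image,
    flavor=flavor,
    network=network,
    wait=True,
)

# Persistent storage for the service to build on.
volume = conn.create_volume(size=100, name="self-hosted-svc-data")
conn.attach_volume(server, volume)
```

Nothing here needs the operator’s cooperation beyond the core APIs themselves, which is the whole point.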

Require that “common” services have a public integration test suite that can be run by any user of a cloud to evaluate conformance of a particular implementation of the service.

For example, a user might point the test suite at HP Cloud to verify that the common service there actually conforms to the service test standard. Alternatively, the user who self-provides a common service in a cloud which does not provide it can verify that their self-deployed common service is functioning correctly. This also serves to expand the test suite for the core: we can self-deploy common services and run their test suites to exercise the core more thoroughly than Tempest could.
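
A minimal sketch of such a user-runnable check, for an object-storage common service (the openstacksdk calls and the names here are illustrative assumptions; a real conformance suite would of course be far broader):

```python
# A sketch of a user-runnable conformance check for an object-storage
# "common" service: point it at any cloud and it passes only if the
# service round-trips data as the API contract requires. Assumes
# openstacksdk and a clouds.yaml entry named "mycloud".
import uuid

import openstack


def test_object_store_roundtrip():
    conn = openstack.connect(cloud="mycloud")
    container = "conformance-" + uuid.uuid4().hex
    payload = b"hello, conformance"

    conn.object_store.create_container(name=container)
    conn.object_store.upload_object(container=container,
                                    name="probe", data=payload)

    # The defining behaviour: what you stored is what you get back.
    fetched = conn.object_store.download_object("probe",
                                                container=container)
    assert fetched == payload

    # Clean up so the check can be rerun against the same cloud.
    conn.object_store.delete_object("probe", container=container)
    conn.object_store.delete_container(container)
```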

Keep the whole set as small as possible.

We know that small is beautiful; small is cleaner, leaner, more comprehensible, more secure, easier to test, more likely to be efficiently implemented, and better able to attract developer participation. In general, if something can be cut from the core specification, it should be. “Common” should reflect common practice and can be arbitrarily large, and also arbitrarily changed.

In the light of those ideas, I would designate the following items from Subhu’s list as core OpenStack services:

  • Keystone (without identity, nothing)
  • Nova (the basis for any other service is the ability to run processes somewhere)
    • Glance (hard to use Nova without it)
  • Neutron (where those services run)
    • Designate (DNS is a core aspect of the network)
  • Cinder (where they persist data)
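
Each of these is something a user can detect from outside the cloud. As a rough illustration of that criterion (the keystoneauth1 usage and the service-type strings are assumptions of this sketch, and vary between deployments), a user can ask the Keystone catalog which of these a cloud actually advertises:

```python
# A sketch of "the user can detect them": ask the Keystone service
# catalog which service types a cloud advertises. The service-type
# strings below (e.g. "volumev3" for Cinder, "dns" for Designate) are
# assumptions of this sketch and differ between deployments.
import openstack
from keystoneauth1 import exceptions

conn = openstack.connect(cloud="mycloud")

CORE_SERVICE_TYPES = ["identity", "compute", "image",
                      "network", "dns", "volumev3"]

for service_type in CORE_SERVICE_TYPES:
    try:
        url = conn.session.get_endpoint(service_type=service_type,
                                        interface="public")
        print(f"{service_type:10s} advertised at {url}")
    except exceptions.EndpointNotFound:
        print(f"{service_type:10s} not advertised by this cloud")
```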

I would consider these to be common OpenStack services:

  • SWIFT (widely deployed, can be self-provisioned with Cinder block backends)
  • Ceph RADOS-GW object storage (widely deployed as an implementation choice, common because it could be self-provided on Cinder block)
  • Horizon (widely deployed, but we want to encourage innovation in the dashboard)

And these I would consider neither core nor common, though some of them are clearly on track there:

  • Barbican (not widely implemented)
  • Ceilometer (internal implementation detail; can’t be common because it requires privileged access to the internals of other services)
  • Juju (not widely implemented)
  • Kite (not widely implemented)
  • HEAT (on track to become common if it can be self-deployed; besides, I eat controversy for breakfast)
  • MAAS (who cares how the cloud was built?)
  • Manila (not widely implemented, possibly core once solid, otherwise common once, err, common)
  • Sahara (not widely implemented, weird that we would want to hardcode one way of doing this in the project)
  • Triple-O (user doesn’t care how the cloud was deployed)
  • Trove (not widely implemented, might make it to “common” if widely deployed)
  • Tuskar (see Ironic)
  • Zaqar (not widely implemented)

In the current DefCore discussions, the “layer” idea has been introduced. My concern is simple: how many layers make sense? End users don’t want to have to figure out what lots of layers mean; if we had “OpenStack HPC”, “OpenStack Scientific” and “OpenStack Genomics” layers, that would just be confusing. Let’s keep it simple: use “common” as a layer, but be explicit that it will change to reflect common practice. (Of course, anything in common is self-reinforcing, in that new players will defer to norms and implement common services, thereby entrenching them, unless new ideas make those services obsolete.)

