
Initial stab jobspec language definition #53

Merged 2 commits into flux-framework:master on Jul 27, 2016

Conversation

@grondo (Contributor) commented Jul 15, 2016

Based on our meeting yesterday I took an initial stab at fleshing out some of the Jobspec Language Definition section. I'm not too happy with what's here, but maybe this can generate some discussion and we can get an initial version merged today?

@grondo (Contributor, Author) commented Jul 15, 2016

It occurs to me that count for a task cannot be required. There needs to be some way to specify in the canonical jobspec that, when a range is given, the number of tasks actually executed is a function of the number of task slots actually allocated.

@trws (Member) commented Jul 18, 2016

  • resource spec
    • each resource vertex shall contain:
      • id: integer vertex id
      • uuid: globally unique resource id
      • name: string, - or user-specified pattern
      • logical_id: id in local neighborhood
      • system_id: id in local neighborhood
      • type
      • amount:
      • unit:
      • a list of edges where edge has:
        • direction: in, out or inout
        • from-id: id or uuid
        • to-id: id or uuid
        • type: link-type, or graph type
        • attributes:
      • executable: boolean
      • attributes
  • resource query: abstract description of a resource spec
    • each resource descriptor shall contain:
      • type
      • sharing: exclusive, shared
      • count:
        • min
        • max
        • operator
        • operand
      • unit
      • a list of edge matchers
  • program spec
    • at least one resource, conforming to resource spec
    • at least one task with:
      • command: string or list
      • count_per_slot OR total: one or other, not both
      • total
      • distribution: policy?

@grondo (Contributor, Author) commented Jul 19, 2016

Ok, I pushed a new Jobspec Language Definition section which I hope captures the result of yesterday's meeting, and @trws comments above. This version is still pretty rough, but I'd like to narrow down on something acceptable to merge soon.

@trws, given the above, I'm still assuming a conformant jobspec "SHALL" be a dictionary consisting of the keys resources, tasks, walltime, and attrs. It seems like that still must hold, though I can't remember if that is required in your test parser/digester.
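For concreteness, a minimal sketch of that assumed top-level shape might look like the following (the four keys only, with placeholder contents; the value syntax under each key is exactly what the rest of this discussion is working out):

resources:
  - type: node
    count: { min: 1, max: 1, operator: "+", operand: 1 }
    # further resource request details per the resource query spec
tasks:
  - command: myapp
    # task details (count, distribution, etc.)
walltime: 3600   # placeholder; the form of this value is discussed below
attrs: {}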

@SteVwonder (Member) commented

@grondo: looks good!

My only confusion is with the id key. My understanding is that this key means two different things when in a query vs in a resource description. In a query, it is used as a label that can be referenced later on in the query (i.e., when defining where a task should execute). In a resource description, it is used as a unique identifier for a given resource and is used as a reference by Flux modules (e.g., resrc, wreck, etc) within that Flux instance. Is this correct?

@grondo (Contributor, Author) commented Jul 19, 2016

My understanding is that this key means two different things when in a query vs in a resource description. In a query, it is used as a label that can be referenced later on in the query (i.e., when defining where a task should execute).

Yeah, jobspec only deals with query so I think we're only talking about the use as a label. (The other use of id is detailed in RFC4 I think). Maybe it would be better to rename id to label for the jobspec?

@trws (Member) commented Jul 19, 2016

I like that to help clarify the difference. There may be a time when using an id or a UUID in a query is appropriate, if someone is requesting a specific resource by id or something. I like how this is looking, and given our discussions the other day I agree that requiring resources and tasks as keys at the program level is the way to go. Programs, tasks, and resources are distinct types now, and the "slot" resource type serves as the task target in the resource specification.

My parser does not currently work that way, but it will make life noticeably easier once I make it work that way.

@lipari (Contributor) commented Jul 19, 2016

@grondo, the resource id term is overloaded. As it is written in your PR, it refers to a resource slot_id and this differs from the resource ID attribute described in RFC 4. The task slot would reference the resource slot_id. That's what I recall anyway.

@lipari (Contributor) commented Jul 19, 2016

Sorry... I glossed over the earlier comments before lunch. I don't mean to rehash what's been said.

@grondo (Contributor, Author) commented Jul 19, 2016

Programs, tasks, and resources are distinct types now, and the "slot" resource type serves as the task target in the resource specification.

Hm, I perhaps wasn't clear on this particular point from yesterday. Are you saying that a "slot" is now a full type, e.g. type: slot, and not just defined by giving a resource vertex a "label", e.g.

resources:
 - type: core
   label: default
   #  other keys left out for brevity
tasks:
  - command: myapp
    slot: default

vs

resources:
  - type: slot
    label: default
    with:
        - type: core
          # ...
tasks:
  - command: myapp
    slot: default

Either way is fine with me, actually. Though the type: slot syntax is more verbose, it does seem clearer, and we'd need to group resources in many real-world scenarios anyway (you probably don't want a core without memory, for example), so the canonical request for a core (using the group syntax) would likely turn into:

resources:
  - type: group
    label: default
    count: { min: 1, max: 1, operator: "+", operand: 1 }
    with:
      - type: core
        count: { min: 1, max: 1, operator: "+", operand: 1 }
        sharing: exclusive
      - type: memory
        count: { min: Xmb_per_core, max: Xmb_per_core, operator: "+", operand: 1 }
        units: "MB",
        sharing: exclusive
tasks:
  - command: myapp
    slot: default
    count_per_slot: 1
    distribution: default
    attrs: {}

@garlick (Member) commented Jul 19, 2016

Formatting comment: consider using capitalized subheadings for top level keys. The indentation is a bit subtle in the github rendering with the current approach.

@grondo (Contributor, Author) commented Jul 19, 2016

Formatting comment: consider using capitalized subheadings for top level keys.

Thanks, I assume you are talking about the RFC, not the YAML snippets? I think I noticed that about the document as well; however, I was hesitant to change the case of top-level jobspec keys just for the RFC. I don't actually like the way the whole thing in the RFC is presented and would be open to comments on a different way to present the constraints... a table perhaps?

@trws (Member) commented Jul 19, 2016

A table, nested bullets, or something similar might be practical. I'm also thinking we might add an actual schema based on something like the Rx YAML/JSON schema setup, since that makes it trivial to verify the structure, if not the semantic content, of a spec.

@grondo (Contributor, Author) commented Jul 19, 2016

Unfortunately it appears GitHub rendering does not indent definition lists. I've changed the formatting as @garlick suggested; I'm afraid the indentation is still a bit subtle, but hopefully it's enough of an improvement that we can parse the content and review that part of the doc.

Sadly, when I was formatting I mixed in the changes from id -> label, so that change snuck in there as well.

However, @trws, the current language in this doc allows any resource vertex to be labeled as a slot with the label key, and I'm not entirely sure that is what you wanted; see my comment above.

I like the idea of formalizing a schema here as well -- If we do it as JSON, we could use the JSON content rules we've used elsewhere (as explained to me by @garlick).

@garlick (Member) commented Jul 19, 2016

That looks quite a bit more readable to me :-)

Since a min/max count of a resource can be requested, does wallclock need to be expressible as a function of the actual quantity of resources allocated? (maybe using a label like with slots?)

@grondo (Contributor, Author) commented Jul 19, 2016

Since a min/max count of a resource can be requested, does wallclock need to be expressible as a function of the actual quantity of resources allocated? (maybe using a label like with slots?)

Great point! Perhaps this was already discussed, but it might be sufficient for now to say walltime is also a dict with at least one supported key, where the first key we support is duration or something similar. Extra keys could be reserved for future use (for instance a function, a list of labels, etc.).

Your comment also reminds me that we should ensure we have some way in the future for users to be able to submit a list of resource descriptors that represent a set of alternatives (|| vs &&). I thought perhaps @trws had an idea of how to do that with edges, but I don't quite fully grok how that would work. If we need to add an extra level to the jobspec, now is when we should do it. Like walltime we can define parts of the spec as a dict with only one supported key, with other keys reserved for future expansion.

@dongahn (Member) commented Jul 19, 2016

Great point! Perhaps this was already discussed, but it might be sufficient for now to say walltime is also a dict with at least one supported key, where the first key we support is duration or something similar. Extra keys could be reserved for future use (for instance a function, a list of labels, etc.).

I agree, this is a good point. This should be important for expressing moldable jobs.

A simple way to express walltime as a function of the allocated quantity may be to use a resource scaling factor?

Specifically, users could either specify a constant walltime regardless of the resource quantities allocated, or a walltime that is a function of the scaling factor and an adjustment constant:

walltime = (walltime requested at maximum resources) x scaling x C, where scaling = (maximum requested quantity) / (allocated quantity) and C is an adjustment constant
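For instance, with purely illustrative numbers: if a user requests up to 8 nodes with a 60-minute walltime at that maximum, and the scheduler actually allocates 4 nodes, then scaling = 8 / 4 = 2, and with C = 1 the effective walltime request becomes 60 x 2 = 120 minutes.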

@dongahn (Member) commented Jul 20, 2016

BTW, I just noticed this in one of the resource vertex fields: system_id: id in local neighborhood

I think this was meant to be a system-defined id?

It would also be good if we could pencil in the purpose of having two id spaces as well.

@grondo (Contributor, Author) commented Jul 20, 2016

@dongahn, I don't think we should conflate the "resource spec" (or resource description), which is currently defined in RFC 4, with the jobspec "resource query spec", which is what we are defining in this RFC.

This RFC now uses "label" instead of "id" to denote task slot labels which can then be referred to in other parts of the jobspec, most notably task specifications.

As @trws points out, we will need to support matching on id, logical_id, tags, etc. in the resource query spec, and that isn't there yet. Generic resources will have generic properties and attributes, so those might fall under a properties key or some other extensible component, and each property or attribute may have different rules for matching (e.g. id could take a nodeset-style list, while tags might be a syntax tree or a string representing a query such as "(griffin or bear) and goshawk and not slow"). Maybe we can use the same rules for the various missing resource components?
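As a purely hypothetical illustration of what such an extensible matching component could look like (the keys names, properties, and match below are invented for this example and are not part of the current draft):

resources:
  - type: node
    names: "hype[201-232]"   # hypothetical nodeset-style list match
    properties:
      match: "(griffin or bear) and goshawk and not slow"   # hypothetical query-string match
    with:
      - type: core
        count: { min: 1, max: 1, operator: "+", operand: 1 }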

@grondo (Contributor, Author) commented Jul 20, 2016

@dongahn: Sorry, I just realized you were correcting a typo above, and not necessarily talking about adding system_id or logical_id to this RFC. Apologies, and sorry for my tangential diatribe!

@dongahn (Member) commented Jul 20, 2016

@grondo: yes the last comment was a typo correction. Sorry, I should have inlined my comment with the original posting.

@lipari (Contributor) commented Jul 20, 2016

Three misc comments:

  • walltime, like a count, could be a range
  • The concept of a task slot of resources needs better definition in the Resources section. Specifically, we need to state whether all the resources described in the resources section are one task slot, whether they are an aggregation of multiple task slots, or whether multiple task slots can be defined under a single resources key (see the sketch after this list).
  • We could theoretically support "count_per_slot AND total" tasks. Do we want to?
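As promised above, here is a rough sketch of the last interpretation (multiple task slots defined under a single resources key), reusing the labeled-slot syntax from examples earlier in this thread; nothing here is settled, and the counts are abbreviated:

resources:
  - type: slot
    label: compute
    with:
      - type: core
        count: 4
  - type: slot
    label: io
    with:
      - type: core
        count: 1
tasks:
  - command: simulation
    slot: compute
    count_per_slot: 1
  - command: io_forwarder
    slot: io
    count_per_slot: 1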

@dongahn (Member) commented Jul 20, 2016

@lipari, good points. I think 'walltime' as a range supports the same concept as the scale factor-based spec I suggested, if I understand you right. One is just more explicit than the other.

So long as we are clear on what a walltime range means in the spec, and it covers the scale factor case, I think this is a good idea.

@trws (Member) commented Jul 20, 2016

Using a range for walltime that matches or exceeds the number of levels allowed by the corresponding resource ranges, as @lipari suggests, was also the idea that Suraj, @tpatki, and I settled on when hashing over the walltime issue last summer. I think that's the easiest way to apply it in the short term, and we can always extend it later.

As to supporting both per and total, we certainly can, but we need to be clear about which overrides the other. I would probably say that total is the maximum, regardless of "per_slot," but either way it would need to be explicit.

@lipari (Contributor) commented Jul 20, 2016

@dongahn, I see two approaches to consider and I was inserting the first:

  • Give me X resources for anywhere between m to n minutes.
  • Give me X resources for n minutes or (X / 4) resources for (n * 4) minutes

@trws (Member) commented Jul 20, 2016

As @grondo points out, RFC4 defines the actual resource hierarchy spec, and we have diverged in a couple of places. It might be good to reconcile some of these, since this is effectively defining the find and/or match components of RFC4 in terms of an RFC4 resource graph. The resource spec syntax, or what goes into the "resources" key, could also be considered a valid format for a serialization of at least an abstract resource graph as defined in RFC4, though it might need a couple of tweaks.

The only things that jump out at me:

  • RFC 4 requires basename, name and id such that name is basename-id
  • tags exist on pools and resources; these are valueless attributes, so they could be modeled, but currently are not in any explicit way
  • properties == attributes; we can just change the name of the key, or say they're the same, and this one works
  • Size and allocation table: size we talked about (or count/amount/unit for requests); we have also discussed representing allocations by incoming edges that record the amount consumed by the edge, so as not to maintain another list, but this doesn't map 1-1 just yet
  • Hierarchy table: this is encoded in incoming/outgoing edges and edge types in the language we've been using for this RFC; again it's the same information, but we might want to normalize the language

@grondo (Contributor, Author) commented Jul 20, 2016

Give me X resources for n minutes or (X / 4) resources for (n * 4) minutes

I think this was the case @garlick mentioned above. I had suggested, for extensibility, that we promote the walltime key to a dictionary, with each key supporting a different method of communicating the time limit, e.g. a strict duration, a range similar to the resource count range, or a function of the task slot count or some other generic field of the resources actually allocated. If there is some future method for communicating walltime, someone can just add a new key instead of modifying the definition of the walltime value.
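A rough sketch of the extensible walltime dictionary being proposed, with duration as the only key defined initially (the numbers are placeholders and none of this syntax is settled):

walltime:
  duration: 3600    # seconds

A possible future form, reserved but not defined yet, might be a range mirroring the resource count range:

walltime:
  range: { min: 1800, max: 7200 }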

@trws (Member) commented Jul 20, 2016

I like the idea of making it a dict so it's easier to extend later, especially to support a user-defined function or expression embedded in it.

@dongahn (Member) commented Jul 20, 2016

@grondo: making it extensible makes sense to me. I think duration, range, and function already cover a wide range of cases.

@grondo (Contributor, Author) commented Jul 20, 2016

RFC 4 requires basename, name and id such that name is basename-id

Actually, it should not require basename and id: basename is optional and defaults to the type name (e.g. "core" for a core, though a node can have a basename of "hype", for instance), and name defaults to "${basename}${id}" but can be overridden with an explicit name.
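As a minimal illustration of the defaulting rule just described (RFC 4 resource fields; the values here are invented):

type: node
id: 42
basename: hype    # optional; defaults to the type name ("node")
name: hype42      # defaults to "${basename}${id}" unless overridden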

We should have a way in the jobspec language to request (and exclude!) a set of resources by id or name.

properties == attributes; we can just change the name of the key, or say they're the same, and this one works

The important distinction in RFC4 is that properties are a shared attribute of a common resource type, inherited by all instances, and attributes/tags are specific to an instance of a resource type. For the query language that distinction probably does not matter. (Perhaps that is what you were saying above.)

In general, I like the direction @trws is going with normalizing the RFC 4 resource hierarchy terms and the jobspec resource spec. This is actually getting quite close to one of our original goals of making the language used for resource queries and resource configuration the same (borrowed from ClassAd). We should be able to define a "matcher" for each of the components of a hierarchical resource and make this part of the resource query language (e.g. we should be able to query against id, basename, name, or any defined resource component). In fact, a resource definition in this language (e.g. the serialized version proposed by @trws) should also be a valid match for that exact resource.

Using this spec as the serialization language seems like a very good idea, and it would be interesting to explore that after we've settled on the basic components here (e.g. merge this PR) ;-)

(Sorry if I kind of got off on a tangent here)

@dongahn (Member) commented Jul 20, 2016

BTW, at the expense of making noise, I sort of see a commonality between the extensible walltime spec and the task shape spec.

Do we want to make the task count/shape spec extensible also? Right now we support tasks per slot and total, but later we might want to extend it to cover some odd shapes, e.g. a list of task counts or some mapping function for shapes that a conventional distribution policy and count cannot easily create.

To be clear, I am not suggesting we specify these now, just that we make the spec extensible for later use.

@lipari (Contributor) commented Jul 25, 2016

Does that make sense?

I believe it would be more understandable, to me anyway, if you would provide the YAML slot definitions for each of the two scenarios: a request for an exclusive allocation of a node that has at least two sockets, and a request for two sockets on the same node.

@trws (Member) commented Jul 25, 2016

a request for an exclusive allocation of a node that has at least two sockets

type: slot
with:
  - type: node
    with:
      - type: socket
        count: 2

a request for two sockets on the same node

type: node
with:
  - type: slot
    with:
      - type: socket
        count: 2

@grondo (Contributor, Author) commented Jul 25, 2016

I agree, it's definitely clumsy. The one part I'm not sure about is how we should handle resources exclusively allocated to the job, but not to an individual task slot. Maybe allow extra slot vertices that aren't themselves actually used with tasks directly to set exclusivity farther up the tree?

Good point. Your idea here works pretty well; however, I fear the use of type: slot here would be confusing to users and could cause problems down the road. I don't have a better idea, though maybe this is a good case to bring back type: group as an unlabeled slot? Alternately, it is a good reason to keep an optional sharing keyword, though I'd say it would be clearer as an optional exclusive boolean placed on a vertex (and obviously inherited by its with: children).

@trws (Member) commented Jul 25, 2016

I like the optional exclusive boolean idea; it shouldn't be needed very often, and if it is, it will get set.

@grondo (Contributor, Author) commented Jul 25, 2016

@trws's case of resources exclusively allocated to a job but not part of any task slot makes a great use case. However, I'm having trouble coming up with a non-trivial example; a license, for instance, is obviously exclusively allocated since it will be a leaf in the request graph.

Any ideas of a use case requesting a hierarchical resource exclusively, but not part of any task slot? (I guess a node that doesn't run anything is an example, but anything better?)

@trws (Member) commented Jul 25, 2016

@grondo, if we go by the rule that resources under a slot are exclusive, then we wouldn't necessarily have exclusive leaf resources anymore. A license is actually the best case I can immediately think of. Alternately, a user might want to exclusively allocate every node they run on, but confine their task slot to the socket level.
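That last case might look something like this with the optional exclusive boolean under discussion (a sketch only; counts are abbreviated and nothing here is settled):

resources:
  - type: node
    exclusive: true          # the whole node is allocated to the job...
    with:
      - type: slot
        label: default
        with:
          - type: socket
            count: 1         # ...but the task slot is confined to one socket
tasks:
  - command: myapp
    slot: default
    count_per_slot: 1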

@grondo (Contributor, Author) commented Jul 25, 2016

Good point, shared/exclusive and the slot are really two different things, and I agree it does seem we need both. The slight changes we've kind of agreed on so far are indeed making it less awkward for me.

Let me make sure I've got the rules correct:

  • An optional exclusive: true setting on a resource marks that resource and all of its with: children as required to be exclusively allocated.
  • type: slot automatically assumes exclusive: true
  • Leaf vertices in the request graph automatically assume exclusive: true

That works for me, but I realize the last point might still be in question. For me, it doesn't make sense to request a resource "shared" without saying what portion of that resource you actually need to allocate.

@trws (Member) commented Jul 25, 2016

I agree with the first two; about the last I'm less sure. That said, I'm not sure I have a good counter-example. The only thing bugging me is that it seems possible, despite my present inability to come up with an example, that a user would want to request that the structure of the graph contain something at a leaf location that they are not actually allocating. Actually, maybe that's the issue. If it's limited to leaves of the with:-link-based hardware tree, it seems to work, but if it's leaves of the request graph then it would become problematic?

@grondo (Contributor, Author) commented Jul 25, 2016

Actually, maybe that's the issue. If it's limited to leaves of the with:-link-based hardware tree, it seems to work, but if it's leaves of the request graph then it would become problematic?

That is what I was thinking, too. My rules above only apply to the with: tree. We haven't fully specified how a generic graph query would look, so I'm having trouble wrapping my head around one at all.

My problem with not requiring a leaf vertex in the with: tree to be exclusive is that it seems like the system has to "fill in" the missing children when they are unspecified and the parent is "shared". E.g. a request for a "shared" node makes no sense, as you have to allocate something from that node, whether it is one core and some RAM, or 1 "compute share" -- and that should be reflected in the canonical jobspec IMO.

However, I realize it is taking me awhile to get some of this stuff, so I'm willing to admit I may be wrong.

@trws (Member) commented Jul 25, 2016

That's actually not quite how I see it. If the thing is shared, it has to exist, but nothing from it is actually allocated at all. It serves as a requirement on the system structure, but contributes nothing to the list of things that the scheduler actually has to consider for allocation. Perhaps this is a third state we haven't talked about, where resources that are shared but partially consumed (like pools of memory, or nodes allocated shared) are distinct from resources where the allocation is just none. That's what I'm thinking of when I think of shared resources that are leaves outside a slot: they're relevant to the query as structural requirements, but ignored for allocation purposes.


@grondo (Contributor, Author) commented Jul 25, 2016

Ah, I think your conceptual view of resources differs from mine. In my mental model (of the with hierarchy at least), every resource that can be allocated is a composite. So a node is a composite of its memory, sockets, cores, GPUs, etc. If any of these children are allocated or consumed, the node itself is partially consumed, as is the cluster, which is a composite of all its nodes.

Your mental model is more what I was thinking of for something like the "topology" graph, which as you say could have entities in it which are non-allocatable, and which could affect the structure or relation of the resulting allocated resources -- but I would hesitate to call these things "resources" if they cannot be allocated by the resource manager.

So I do agree with you that a leaf vertex in the topology or other structural matching case does not imply exclusive. However, if the resource is non-allocatable, then the exclusivity of leaves could just be ignored?

Also, I still wonder if a request for a single shared node by itself makes any sense, as the request has no "shape".

@trws (Member) commented Jul 25, 2016 via email

@tpatki (Member) commented Jul 25, 2016

@grondo
I'm not sure I'm following. Why doesn't a request for a single shared node by itself make sense?

As a user, there can be a scenario where you want to allocate, say, a core or a set of cores to yourself because you don't need the full node, right?

I think the question is what the default graph looks like if someone doesn't specify a "with" clause (one core and some memory?). Or should we make a "with" clause mandatory in that scenario, as @trws suggested?

@grondo (Contributor, Author) commented Jul 25, 2016

Maybe a good way to deal with this is to say that a slot must have at least one "with:" child, and that's what's exclusively allocated. That way, there's no way to request a single shared anything without at least one exclusive resource?

That works for me, and actually seems similar to what we were discussing before.

@grondo (Contributor, Author) commented Jul 25, 2016

I'm not sure I'm following. Why doesn't a request for a single shared node by itself make sense?

As a user, there can be a scenario where you want to allocate, say, a core or a set of cores to yourself because you don't need the full node, right?

I think the question is what the default graph looks like if someone doesn't specify a "with" clause (one core and some memory?). Or should we make a "with" clause mandatory in that scenario, as @trws suggested?

Yes, I think we're saying the same thing. It may have been incorrect to say it "doesn't make sense"; rather, a request for one shared node alone is incomplete, and therefore should not be allowed in the canonical jobspec.

@tpatki (Member) commented Jul 25, 2016

Another question: what if the user gives a request that can't be translated successfully into a "shape"? Do we have a mechanism for addressing this yet -- do we just throw an error or do we default to a basic setup (for example, if you request a shared node and forget the with clause)?

I don't know how SLURM or current resource managers address this.

@trws (Member) commented Jul 25, 2016

@tpatki It depends on the type of issue. If what they specify doesn't follow the spec, it will get rejected with an error by the parser. If it's something we can determine can't be supplied as part of the resources that can reasonably be made available, they'll get an error or a long-waiting job depending on policy.

@grondo (Contributor, Author) commented Jul 27, 2016

OK, in this PR I've updated the description of the tasks slot key to allow either label (explicit task slot) or level (implicit task slot), as suggested by @trws. I quickly updated the example and added another simple example using the implicit task slot.
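Roughly, the two forms might contrast as follows; this is a hypothetical rendering for discussion only, and the exact syntax and semantics of the level key are whatever the updated PR text defines, not this sketch:

tasks:
  - command: myapp
    slot:
      label: default     # explicit task slot, referencing a labeled slot in resources

tasks:
  - command: myapp
    slot:
      level: node        # implicit task slot at the node level of the resource tree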

@grondo (Contributor, Author) commented Jul 27, 2016

If we're getting close to ready to merge this, I can squash down all the incremental work. There is probably still a lot of editing to do, and the spec language definition could be presented better, but perhaps we could do that in a future PR.

@trws (Member) commented Jul 27, 2016

I'm good with that. It seems like we're at the point where it doesn't really make sense to do much more tweaking without trying it out and seeing what happens.

grondo added 2 commits on July 27, 2016 11:15: Take an initial stab at defining the version 1 jobspec language, including a sample of JSON Content Rules summarizing the requirements.

@grondo force-pushed the jobspec-definition branch from f178e64 to 5007b18 on July 27, 2016 18:15
@grondo (Contributor, Author) commented Jul 27, 2016

Ok, squashed!

@trws merged commit ee7336d into flux-framework:master on Jul 27, 2016
@trws (Member) commented Jul 27, 2016

Very nice, thanks for all the hard work on this @grondo!

@grondo (Contributor, Author) commented Jul 27, 2016

Thanks!!

