Defining Head Coverage for rules which predict a constant #35

Open
falcaopetri opened this issue Jul 22, 2020 · 3 comments

@falcaopetri
Contributor

Hi,

The Head Coverage has been defined as "the ratio of instantiations of the head atom that are predicted by the rule" (AMIE 3's paper). Its formula is given as:

hc(B ⇒ r(x, y)) = supp(B ⇒ r(x, y)) / #(x, y) : r(x, y)        (Equation 1)

Now, consider a rule which predicts a constant: B ⇒ r(x, C) (in AMIE's notation, B => ?x relation C). Since y is instantiated to the constant C, I thought that HC would be calculated as:

hc(B ⇒ r(x, C)) = supp(B ⇒ r(x, C)) / #x : r(x, C)        (Equation 2)

As I understood from the code, HC uses MiningAssistant#getHeadCardinality, which uses a fixed cardinality for each head relation. Therefore, HC is:

hc(B ⇒ r(x, C)) = supp(B ⇒ r(x, C)) / #(x, y) : r(x, y)        (Equation 3)

Also, as I understood from InstantiatedHeadMiningAssistant (which counts over a single variable), HC would be:

hc(B ⇒ r(x, C)) = supp(B ⇒ r(x, C)) / #x : r(x, y)        (Equation 4)

  1. Are these observations correct?
  2. Does Equation 2 make sense? As I see it, Equation 2 can be considered as Equation 3 conditioned on y = C. Therefore, both have applications, depending on how we read the associated metrics (see the sketch below).
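
To make the difference concrete, here is a minimal sketch of the three denominators (my own toy code, not AMIE's implementation; it assumes the KB is just a list of (subject, relation, object) triples and that the support is computed elsewhere):

```python
# Hypothetical helper, not AMIE's code: computes the denominators of
# Equations 3, 2 and 4 for a head relation r and an instantiated object C.
def head_denominators(kb, relation, constant_object):
    # Equation 3: size of the head relation, #(x, y) : r(x, y)
    all_facts = {(s, o) for (s, r, o) in kb if r == relation}
    # Equation 2: subjects whose object is the constant, #x : r(x, C)
    subjects_with_constant = {s for (s, r, o) in kb if r == relation and o == constant_object}
    # Equation 4: distinct subjects of the head relation, #x : r(x, y)
    distinct_subjects = {s for (s, r, o) in kb if r == relation}
    return len(all_facts), len(subjects_with_constant), len(distinct_subjects)

kb = [
    ("alice", "livesIn", "paris"),
    ("bob",   "livesIn", "paris"),
    ("carol", "livesIn", "rome"),
    ("carol", "livesIn", "paris"),
    ("dave",  "livesIn", "rome"),
]
print(head_denominators(kb, "livesIn", "paris"))  # -> (5, 3, 4)
```

With a support of, say, 2 for B ⇒ livesIn(x, paris), Equation 3 would give 2/5, Equation 2 would give 2/3, and Equation 4 would give 2/4.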

Regards,
Antonio.

@falcaopetri changed the title from "Question about Hea" to "Defining Head Coverage for rules which predict a constant" on Jul 22, 2020
@lajus
Contributor

lajus commented Jul 23, 2020

Hi,

Head Coverage can be related to the notion of recall of a rule: it computes how well the predictions of a rule cover the set of facts with the same head relation. In this sense, the proper formulae are Equation 1 and Equation 3.

Equation 2 computes how well the predictions of your rule, the x's such that r(x, C) holds, cover the facts of the form r(x, C) in the KB. This new measure also captures a more specific form of recall and as such should "make sense", but I would not consider it to be the Head Coverage.

Indeed, AMIE supposes that the head coverage is monotone, i.e., the head coverage of a refinement of a rule must be no greater than the head coverage of the initial rule. If we consider the rule Equation 3 as a refinement of the rule Equation 1 and we change the formula used, this property is not guaranteed to hold.
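
For instance (numbers made up): suppose the KB contains 100 facts r(x, y), of which 10 have the form r(x, C). A rule B ⇒ r(x, y) with support 20 has a head coverage of 20/100 = 0.2. A refinement B ⇒ r(x, C) with support 8 still has 8/100 = 0.08 ≤ 0.2 with the fixed denominator, but 8/10 = 0.8 with the instantiated denominator, i.e., more than its parent. A head-coverage threshold could then prune the parent rule while one of its refinements would still pass it, so the pruning would no longer be safe.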

TL;DR: Equation 3 is not Head Coverage but a new interesting measure.

Cheers,
Jonathan

@falcaopetri
Contributor Author

> If we consider the rule Equation 3 as a refinement of the rule Equation 1 and we change the formula used, this property is not guaranteed to hold.
>
> TL;DR: Equation 3 is not Head Coverage but a new interesting measure.

I'm assuming that these last two references to "Equation 3" should read "Equation 2", right?


Thanks for the explanation, @lajus. I forgot to consider that the monotonicity property was required for pruning.

I really liked your definitions in AMIE 3's paper. My only note is that Equation 1 (from my first comment) led me to wrongly assume that all y's would be instantiated. This was not the case in the previous papers' definitions, since they used a distinct y' in the denominator. Of course, this difference only applies to instantiated heads.
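
Concretely, the earlier papers write the denominator with fresh variables, #(x', y') : r(x', y'), whereas Equation 1 writes #(x, y) : r(x, y); with an instantiated head r(x, C), reading the latter literally binds y to C, which gives exactly the Equation 2 reading above.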

Regards,
Antonio.

@lajus
Contributor

lajus commented Jul 23, 2020

> > If we consider the rule Equation 3 as a refinement of the rule Equation 1 and we change the formula used, this property is not guaranteed to hold.
> >
> > TL;DR: Equation 3 is not Head Coverage but a new interesting measure.
>
> I'm assuming that these last two references to "Equation 3" should read "Equation 2", right?

Yes, that is right.

Jonathan
