
Documenting advanced functionality #4

Open
nschneid opened this issue Jun 4, 2016 · 4 comments

nschneid commented Jun 4, 2016

Moving from #3:

Exo (exercise) modes:

  • teacher visible mode: a simple exercise where students have to copy the teacher's tree, which is visible but not directly modifiable. A good start with 3 sentences or so.
  • no feedback: the student can't see the teacher's tree and gets no feedback, but the admin can export the students' annotations compared against the teacher's trees.
  • percentage: when students save, they see what percentage of dependencies and POS tags they got wrong, but not where.
  • graphical feedback: when students save, they see where their tree differs from the teacher's, and they have to find the right annotation.
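The "percentage" and "graphical feedback" modes both boil down to comparing the student's tree with the teacher's. A minimal sketch of the percentage computation (the function name and the tuple encoding of tokens are assumptions for illustration, not Arborator's actual code):

```python
# Hypothetical sketch of the "percentage" feedback mode: compare a student's
# tree against the teacher's and report the share of wrong dependencies
# (head or label) and wrong POS tags, without saying where the errors are.
# Each token is encoded as (head_index, deprel, pos); names are illustrative.

def percentage_feedback(student, teacher):
    """Return (dependency error %, POS error %) for two same-length trees."""
    assert len(student) == len(teacher), "trees must cover the same tokens"
    n = len(teacher)
    dep_wrong = sum(1 for s, t in zip(student, teacher)
                    if (s[0], s[1]) != (t[0], t[1]))  # head or label differs
    pos_wrong = sum(1 for s, t in zip(student, teacher) if s[2] != t[2])
    return 100.0 * dep_wrong / n, 100.0 * pos_wrong / n

teacher = [(2, "nsubj", "NOUN"), (0, "root", "VERB"), (2, "obj", "NOUN")]
student = [(2, "nsubj", "NOUN"), (0, "root", "VERB"), (1, "obj", "NOUN")]
print(percentage_feedback(student, teacher))  # one wrong head out of three
```

The graphical mode would use the same per-token comparison, but highlight the differing tokens instead of aggregating them into a score.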

baseAnnotatorName = parser

baseAnnotatorName is no longer used. To make it clearer that it matters only for exercise modes, the variable is now called exoBaseAnnotatorName; it is the name of the user that provides the tree the student starts from.
teacher is the user whose trees are used for comparisons and results in exercises.

I assigned the entire sample to the user. Is there a way to assign only particular sentences?

Yes, but not from the graphical interface. There is code to intelligently distribute new sentences, mixed with already-annotated sentences, in such a way that no two students get the same set of sentences to annotate. This is used to class-source dependency annotation...
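The distribution described here could look roughly like this (a minimal sketch of the idea; `distribute` and its parameters are hypothetical, not Arborator's actual code):

```python
import random

# Hedged sketch of class-sourcing distribution: hand each student a distinct
# batch mixing fresh (new) sentences with a few already-annotated ones, so
# that no two students receive the same set. All names are illustrative.

def distribute(new_ids, annotated_ids, students, fresh_per_student, overlap, seed=0):
    """Map each student name to a shuffled list of sentence ids."""
    assert len(new_ids) >= len(students) * fresh_per_student, "not enough new sentences"
    rng = random.Random(seed)
    pool = list(new_ids)
    rng.shuffle(pool)
    assignments = {}
    for i, student in enumerate(students):
        start = i * fresh_per_student
        fresh = pool[start:start + fresh_per_student]  # disjoint slice per student
        shared = rng.sample(annotated_ids, overlap)    # already-annotated mix-ins
        batch = fresh + shared
        rng.shuffle(batch)                             # hide which ones are new
        assignments[student] = batch
    return assignments
```

Because each student's slice of fresh sentences is disjoint, no two batches can be identical, while the shared already-annotated sentences allow agreement checks against existing annotations.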

@nschneid
Copy link
Contributor Author

nschneid commented Jun 4, 2016

Cool, I was wondering how to upload a dataset with the gold-standard annotations hidden from annotators so that I can measure their accuracy. Is the way to do that to upload the dataset and then set teacher to the same value as importAnnotatorName?

@kimgerdes
Member

Yes, and choose the exercise mode "no feedback".

@kimgerdes
Member

Did this work as expected?

@MagaliDuran

We are using Arborator-Grew in exercise mode to train annotators. The facilities of the tool are excellent!
However, we had difficulty deducing that the exercise levels (1 to 4) available in the tool correspond to the exercise modes (teacher visible mode, no feedback, percentage, graphical feedback) described in the documentation. So we kindly ask you to make this information clear on the project wiki page: https://github.com/Arborator/arborator-server/wiki
