Merge pull request #81 from arbisoft/litmustest.staging
Litmustest.staging
rehanedly authored Nov 11, 2022
2 parents e4eb7c7 + 024b14c commit 33824bf
Showing 72 changed files with 2,951 additions and 489 deletions.
2 changes: 2 additions & 0 deletions .gitignore
@@ -73,3 +73,5 @@ test/selenium/screenshots/*
logs/*.log*

edx-ora2/

openassessment/xblock/job_sample_grader/secret_data
17 changes: 17 additions & 0 deletions Makefile
@@ -101,3 +101,20 @@ test-sandbox: test-acceptance test-a11y
install-osx-requirements:
brew install gettext
brew link gettext --force

upgrade: export CUSTOM_COMPILE_COMMAND=make upgrade
upgrade: ## update the requirements/*.txt files with the latest packages satisfying requirements/*.in
pip install -qr requirements/pip-tools.txt
pip-compile --upgrade -o requirements/pip-tools.txt requirements/pip-tools.in
pip-compile --upgrade -o requirements/base.txt requirements/base.in
pip-compile --upgrade -o requirements/test.txt requirements/test.in
pip-compile --upgrade -o requirements/quality.txt requirements/quality.in
# Delete django pin from test requirements to avoid tox version collision
sed -i.tmp '/^[dD]jango==/d' requirements/test.txt
sed -i.tmp '/^djangorestframework==/d' requirements/test.txt
# Delete extra metadata that causes build to fail
sed -i.tmp '/^--index-url/d' requirements/*.txt
sed -i.tmp '/^--extra-index-url/d' requirements/*.txt
sed -i.tmp '/^--trusted-host/d' requirements/*.txt
# Delete temporary files
rm requirements/*.txt.tmp
51 changes: 51 additions & 0 deletions docs/decisions/0001-show-input-file-read-code.rst
@@ -0,0 +1,51 @@
**Add support to show input file read code in the editor by default**
======================================================================

Status
------

*Approved*

Context
-------

- We need an option to control whether input file read code is added to the editor by default
- Current behavior of the editor:

  - There is no way to see, in the editor, where candidates have to read the input file from

  - There is no code available for reading input files

- Users very frequently need support on how to read input files

- We need an option in the ORA settings that lets the author decide whether the system should display input file read code in the editor by default

Decisions
---------

- An option to show input file read code or not

  - There should be a per-question setting in ORA, editable from Studio, where the author can select whether the system should display input file read code

- Display the default input file read code in the editor

  - Default input file read code will be loaded in the selected language whenever the language is changed from the drop-down
  - Refer to *Appendix A* for a sample of the input file read code for each language

  - Default input file read code will only be displayed if the editor is empty or contains the default code of any language
  - Default input file read code will only be displayed if the author sets ``show_read_input_file_code`` to true


Appendix A
----------

**Sample input file read code example**:

.. code-block:: json

    {
        "Python": "Default code of Python",
        "NodeJS": "Default code of NodeJS",
        "Java": "Default code of Java",
        "C++": "Default code of C++"
    }
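
A minimal sketch of how the editor could apply these defaults when the language drop-down changes; the helper name and signature are hypothetical, not part of this change:

.. code-block:: python

    DEFAULT_READ_CODE = {
        "Python": "Default code of Python",
        "NodeJS": "Default code of NodeJS",
        "Java": "Default code of Java",
        "C++": "Default code of C++",
    }

    def editor_text_on_language_change(current_text, new_language, show_read_input_file_code):
        """Return the text the editor should display after a language switch."""
        if not show_read_input_file_code:
            return current_text
        # Replace the buffer only if it is empty or still holds some
        # language's untouched default, so user code is never clobbered.
        if not current_text.strip() or current_text in DEFAULT_READ_CODE.values():
            return DEFAULT_READ_CODE.get(new_language, current_text)
        return current_text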
2 changes: 1 addition & 1 deletion openassessment/assessment/admin.py
@@ -6,7 +6,7 @@
import json

from django.contrib import admin
from django.core.urlresolvers import reverse_lazy
from django.urls import reverse_lazy
from django.utils import html

from openassessment.assessment.models import Assessment, AssessmentFeedback, PeerWorkflow, PeerWorkflowItem, Rubric
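`django.core.urlresolvers` was removed in Django 2.0, so this file (and the fileupload backends further down) now import from `django.urls`, which has provided `reverse` and `reverse_lazy` since Django 1.10. If the code still had to run on older Django versions, a small shim would be one option; a sketch, not part of this commit:

```python
# Hypothetical compatibility shim, not part of this commit: prefer the
# modern django.urls module, and fall back on Django < 1.10, where only
# django.core.urlresolvers exists.
try:
    from django.urls import reverse, reverse_lazy
except ImportError:  # pragma: no cover
    from django.core.urlresolvers import reverse, reverse_lazy
```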
24 changes: 12 additions & 12 deletions openassessment/assessment/migrations/0001_initial.py
@@ -47,7 +47,7 @@ class Migration(migrations.Migration):
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('feedback', models.TextField(default=b'', blank=True)),
('assessment', models.ForeignKey(related_name='parts', to='assessment.Assessment')),
('assessment', models.ForeignKey(related_name='parts', to='assessment.Assessment', on_delete=models.CASCADE)),
],
),
migrations.CreateModel(
@@ -72,7 +72,7 @@ class Migration(migrations.Migration):
('name', models.CharField(max_length=100)),
('label', models.CharField(max_length=100, blank=True)),
('explanation', models.TextField(max_length=10000, blank=True)),
('criterion', models.ForeignKey(related_name='options', to='assessment.Criterion')),
('criterion', models.ForeignKey(related_name='options', to='assessment.Criterion', on_delete=models.CASCADE)),
],
options={
'ordering': ['criterion', 'order_num'],
@@ -102,9 +102,9 @@ class Migration(migrations.Migration):
('submission_uuid', models.CharField(max_length=128, db_index=True)),
('started_at', models.DateTimeField(default=django.utils.timezone.now, db_index=True)),
('scored', models.BooleanField(default=False)),
('assessment', models.ForeignKey(to='assessment.Assessment', null=True)),
('author', models.ForeignKey(related_name='graded_by', to='assessment.PeerWorkflow')),
('scorer', models.ForeignKey(related_name='graded', to='assessment.PeerWorkflow')),
('assessment', models.ForeignKey(to='assessment.Assessment', null=True, on_delete=models.CASCADE)),
('author', models.ForeignKey(related_name='graded_by', to='assessment.PeerWorkflow', on_delete=models.CASCADE)),
('scorer', models.ForeignKey(related_name='graded', to='assessment.PeerWorkflow', on_delete=models.CASCADE)),
],
options={
'ordering': ['started_at', 'id'],
@@ -147,33 +147,33 @@ class Migration(migrations.Migration):
('raw_answer', models.TextField(blank=True)),
('content_hash', models.CharField(unique=True, max_length=40, db_index=True)),
('options_selected', models.ManyToManyField(to='assessment.CriterionOption')),
('rubric', models.ForeignKey(to='assessment.Rubric')),
('rubric', models.ForeignKey(to='assessment.Rubric', on_delete=models.CASCADE)),
],
),
migrations.AddField(
model_name='studenttrainingworkflowitem',
name='training_example',
field=models.ForeignKey(to='assessment.TrainingExample'),
field=models.ForeignKey(to='assessment.TrainingExample', on_delete=models.CASCADE),
),
migrations.AddField(
model_name='studenttrainingworkflowitem',
name='workflow',
field=models.ForeignKey(related_name='items', to='assessment.StudentTrainingWorkflow'),
field=models.ForeignKey(related_name='items', to='assessment.StudentTrainingWorkflow', on_delete=models.CASCADE),
),
migrations.AddField(
model_name='criterion',
name='rubric',
field=models.ForeignKey(related_name='criteria', to='assessment.Rubric'),
field=models.ForeignKey(related_name='criteria', to='assessment.Rubric', on_delete=models.CASCADE),
),
migrations.AddField(
model_name='assessmentpart',
name='criterion',
field=models.ForeignKey(related_name='+', to='assessment.Criterion'),
field=models.ForeignKey(related_name='+', to='assessment.Criterion', on_delete=models.CASCADE),
),
migrations.AddField(
model_name='assessmentpart',
name='option',
field=models.ForeignKey(related_name='+', to='assessment.CriterionOption', null=True),
field=models.ForeignKey(related_name='+', to='assessment.CriterionOption', null=True, on_delete=models.CASCADE),
),
migrations.AddField(
model_name='assessmentfeedback',
@@ -183,7 +183,7 @@ class Migration(migrations.Migration):
migrations.AddField(
model_name='assessment',
name='rubric',
field=models.ForeignKey(to='assessment.Rubric'),
field=models.ForeignKey(to='assessment.Rubric', on_delete=models.CASCADE),
),
migrations.AlterUniqueTogether(
name='studenttrainingworkflowitem',
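These migration edits all make the same change: since Django 2.0, `on_delete` is a required argument of `ForeignKey`, and `CASCADE` (delete dependents along with their parent) matches the previous implicit default. The model files below receive the same treatment. The pattern in isolation, mirroring the `Criterion` model:

```python
from django.db import models

class Criterion(models.Model):
    # Since Django 2.0, on_delete is mandatory; CASCADE reproduces the old
    # implicit default of deleting criteria along with their parent rubric.
    rubric = models.ForeignKey(
        "assessment.Rubric",
        related_name="criteria",
        on_delete=models.CASCADE,
    )

    class Meta:
        app_label = "assessment"
```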
16 changes: 8 additions & 8 deletions openassessment/assessment/models/base.py
@@ -109,7 +109,7 @@ def content_hash_from_dict(rubric_dict):
rubric_dict.pop("content_hash", None)

canonical_form = json.dumps(rubric_dict, sort_keys=True)
return sha1(canonical_form).hexdigest()
return sha1(canonical_form.encode('utf-8')).hexdigest()

@staticmethod
def structure_hash_from_dict(rubric_dict):
@@ -141,7 +141,7 @@ def structure_hash_from_dict(rubric_dict):
for criterion in rubric_dict.get('criteria', [])
]
canonical_form = json.dumps(structure, sort_keys=True)
return sha1(canonical_form).hexdigest()
return sha1(canonical_form.encode('utf-8')).hexdigest()


class Criterion(models.Model):
@@ -152,7 +152,7 @@ class Criterion(models.Model):
All Rubrics have at least one Criterion.
"""
rubric = models.ForeignKey(Rubric, related_name="criteria")
rubric = models.ForeignKey(Rubric, related_name="criteria", on_delete=models.CASCADE)

# Backwards compatibility: The "name" field was formerly
# used both as a display name and as a unique identifier.
@@ -192,7 +192,7 @@ class CriterionOption(models.Model):
Assessment. That state is stored in :class:`AssessmentPart`.
"""
# All Criteria must have at least one CriterionOption.
criterion = models.ForeignKey(Criterion, related_name="options")
criterion = models.ForeignKey(Criterion, related_name="options", on_delete=models.CASCADE)

# 0-based order in Criterion
order_num = models.PositiveIntegerField()
@@ -417,7 +417,7 @@ class Assessment(models.Model):
MAX_FEEDBACK_SIZE = 1024 * 100

submission_uuid = models.CharField(max_length=128, db_index=True)
rubric = models.ForeignKey(Rubric)
rubric = models.ForeignKey(Rubric, on_delete=models.CASCADE)

scored_at = models.DateTimeField(default=now, db_index=True)
scorer_id = models.CharField(max_length=40, db_index=True)
@@ -613,16 +613,16 @@ class AssessmentPart(models.Model):
"""
MAX_FEEDBACK_SIZE = 1024 * 100

assessment = models.ForeignKey(Assessment, related_name='parts')
assessment = models.ForeignKey(Assessment, related_name='parts', on_delete=models.CASCADE)

# Assessment parts are usually associated with an option
# (representing the point value selected for a particular criterion)
# It's possible, however, for an assessment part to contain
# only written feedback, with no point value.
# In this case, the assessment part is associated with a criterion,
# but not with any option (the `option` field is set to null).
criterion = models.ForeignKey(Criterion, related_name="+")
option = models.ForeignKey(CriterionOption, null=True, related_name="+")
criterion = models.ForeignKey(Criterion, related_name="+", on_delete=models.CASCADE)
option = models.ForeignKey(CriterionOption, null=True, related_name="+", on_delete=models.CASCADE)

# Free-form text feedback for the specific criterion
# Note that the `Assessment` model also has a feedback field,
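The other recurring fix in this file (and in `training.py` below) is encoding before hashing: on Python 3, `hashlib.sha1()` accepts only bytes, whereas on Python 2 the `str` being passed was already a byte string. A quick standalone illustration:

```python
import hashlib
import json

rubric_dict = {"prompt": "Compare A and B", "criteria": []}
canonical_form = json.dumps(rubric_dict, sort_keys=True)

# Python 3: hashlib.sha1(canonical_form) would raise
# "TypeError: Strings must be encoded before hashing".
content_hash = hashlib.sha1(canonical_form.encode('utf-8')).hexdigest()
print(content_hash)
```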
6 changes: 3 additions & 3 deletions openassessment/assessment/models/peer.py
@@ -440,11 +440,11 @@ class PeerWorkflowItem(models.Model):
assessment represents the completed assessment for this work item.
"""
scorer = models.ForeignKey(PeerWorkflow, related_name='graded')
author = models.ForeignKey(PeerWorkflow, related_name='graded_by')
scorer = models.ForeignKey(PeerWorkflow, related_name='graded', on_delete=models.CASCADE)
author = models.ForeignKey(PeerWorkflow, related_name='graded_by', on_delete=models.CASCADE)
submission_uuid = models.CharField(max_length=128, db_index=True)
started_at = models.DateTimeField(default=now, db_index=True)
assessment = models.ForeignKey(Assessment, null=True)
assessment = models.ForeignKey(Assessment, null=True, on_delete=models.CASCADE)

# This WorkflowItem was used to determine the final score for the Workflow.
scored = models.BooleanField(default=False)
4 changes: 2 additions & 2 deletions openassessment/assessment/models/student_training.py
@@ -187,11 +187,11 @@ class StudentTrainingWorkflowItem(models.Model):
if there are no examples left, the student has
successfully completed training.
"""
workflow = models.ForeignKey(StudentTrainingWorkflow, related_name="items")
workflow = models.ForeignKey(StudentTrainingWorkflow, related_name="items", on_delete=models.CASCADE)
order_num = models.PositiveIntegerField()
started_at = models.DateTimeField(auto_now_add=True)
completed_at = models.DateTimeField(default=None, null=True)
training_example = models.ForeignKey(TrainingExample)
training_example = models.ForeignKey(TrainingExample, on_delete=models.CASCADE)

class Meta:
app_label = "assessment"
4 changes: 2 additions & 2 deletions openassessment/assessment/models/training.py
@@ -21,7 +21,7 @@ class TrainingExample(models.Model):
# The answer (JSON-serialized)
raw_answer = models.TextField(blank=True)

rubric = models.ForeignKey(Rubric)
rubric = models.ForeignKey(Rubric, on_delete=models.CASCADE)

# Use a m2m to avoid changing the criterion option
options_selected = models.ManyToManyField(CriterionOption)
@@ -137,7 +137,7 @@ def calculate_hash(answer, options_selected, rubric):
'options_selected': options_selected,
'rubric': rubric.id
})
return sha1(contents).hexdigest()
return sha1(contents.encode('utf-8')).hexdigest()

@classmethod
def cache_key(cls, answer, options_selected, rubric):
2 changes: 1 addition & 1 deletion openassessment/fileupload/backends/django_storage.py
@@ -4,7 +4,7 @@

from django.core.files.base import ContentFile
from django.core.files.storage import default_storage
from django.core.urlresolvers import reverse
from django.urls import reverse

from .base import BaseBackend

2 changes: 1 addition & 1 deletion openassessment/fileupload/backends/filesystem.py
@@ -2,7 +2,7 @@

from django.conf import settings
import django.core.cache
from django.core.urlresolvers import reverse
from django.urls import reverse
from django.utils.encoding import smart_text

from .. import exceptions
2 changes: 1 addition & 1 deletion openassessment/fileupload/tests/test_api.py
@@ -16,7 +16,7 @@

from django.conf import settings
from django.contrib.auth import get_user_model
from django.core.urlresolvers import reverse_lazy
from django.urls import reverse_lazy
from django.test import TestCase
from django.test.utils import override_settings

67 changes: 67 additions & 0 deletions openassessment/templates/openassessmentblock/edit/oa_edit.html
@@ -43,6 +43,73 @@ <h2 class="openassessment_alert_title">{% trans "Rubric Change Impacts Settings
</div>
<p class="setting-help">{% trans "The display name for this component." %}</p>
</li>
<li class="openassessment_date_editor field comp-setting-entry">
<div class="wrapper-comp-setting">
<label
for="openassessment_labels_editor"
class="setting-label">
{% trans "Labels" %}
</label>
<input
type="text"
class="input setting-input"
id="openassessment_labels_editor"
value="{{ labels }}"
>
</div>
<p class="setting-help">
{% trans "A comma-separated list of label strings that can be used to categorize a problem. For example, oop, problem solving, dsa, etc." %}
<br /> {% trans "Rules:" %}
<br /> - {% trans "Only lowercase letters [a-z]." %}
<br /> - {% trans "Special character &lt;space&gt; is allowed." %}
<br /> - {% trans "Use \",\" to separate labels." %}
</p>
</li>
<li class="field comp-setting-entry">
<div class="wrapper-comp-setting">
<label for="openassessment_show_private_test_case_results_editor" class="setting-label">
{% trans "Show Private Test Case Results"%}
</label>
<select id="openassessment_show_private_test_case_results_editor" class="input setting-input">
<option value="0">{% trans "False"%}</option>
<option value="1" {% if show_private_test_case_results %} selected="true" {% endif %}>
{% trans "True"%}
</option>
</select>
</div>
<p class="setting-help">
{% trans "Indicates whether or not to show private test case results. This only shows whether each case passed or failed; it does not show any values."%}
</p>
</li>
<li id="openassessment_executor_wrapper" class="field comp-setting-entry">
<div class="wrapper-comp-setting">
<label for="openassessment_executor" class="setting-label">{% trans "Code Executor"%}</label>
<select id="openassessment_executor" class="input setting-input" name="executor">
{% for option_key, option_name in code_executor_options.items %}
<option value="{{ option_key }}" {% if option_key == executor %} selected="true" {% endif %}>{{ option_name }}</option>
{% endfor %}
</select>
</div>
<p class="setting-help">
{% trans "Choose which code executor to use for this question." %}
</p>
</li>
<li class="field comp-setting-entry">
<div class="wrapper-comp-setting">
<label for="openassessment_show_file_read_code_editor" class="setting-label">
{% trans "Show File Read Code"%}
</label>
<select id="openassessment_show_file_read_code_editor" class="input setting-input">
<option value="0">{% trans "False"%}</option>
<option value="1" {% if show_file_read_code %} selected="true" {% endif %}>
{% trans "True"%}
</option>
</select>
</div>
<p class="setting-help">
{% trans "Indicates whether or not to show file read code."%}
</p>
</li>
<li class="openassessment_date_editor field comp-setting-entry">
<div class="wrapper-comp-setting">
<label
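The new "Show Private Test Case Results" and "Show File Read Code" dropdowns presumably bind to boolean XBlock settings. Those field definitions are not part of this template diff, but they would plausibly look something like the following sketch (names inferred from the template ids):

```python
from xblock.core import XBlock
from xblock.fields import Boolean, Scope

class OpenAssessmentBlockSketch(XBlock):
    """Hypothetical sketch only; the real field definitions live elsewhere."""

    show_private_test_case_results = Boolean(
        default=False,
        scope=Scope.settings,
        help="Show only whether private test cases passed or failed, never their values.",
    )
    show_file_read_code = Boolean(
        default=False,
        scope=Scope.settings,
        help="Pre-load the default input file read code into the editor.",
    )
```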