Download the queue.cpp file from Moodle.






















This should not change the behaviour of the question, provided the two templates are consistent, in the sense that running any test in the per-test template yields exactly the same result as running that same test all by itself in the combinator template. CodeRunner requires two separate plug-ins: one for the question type and one for the specialised adaptive behaviour.

The plug-ins are in two different GitHub repositories. Install the two plug-ins using one of the following two methods. Get the code using git by running the following commands in the top-level folder of your Moodle install. Either way, you may also need to change the ownership and access rights to ensure the directory and its contents are readable by the webserver.

You can then complete the installation by logging onto the server through the web interface as an administrator and following the prompts to upgrade the database as appropriate.

In its initial configuration, CodeRunner is set to use a University of Canterbury Jobe server to run jobs. You are welcome to use this during initial testing, but it is not intended for production use. Authentication and authorisation on that server is via an API key, and the default API key given with CodeRunner imposes a limit on the number of jobs per hour over all clients using that key, worldwide.

If you decide that CodeRunner is useful to you, please set up your own Jobe sandbox as described in Sandbox configuration below. Alternatively, if you wish to continue to use our Jobe server, you can apply to the principal developer for your own API key, stating how long you will need to use the key and a reasonable upper bound on the number of jobs you will need to submit per hour.

We will do our best to accommodate you if we have sufficient capacity. Installation also loads a set of special prototype questions; do not touch those special questions until you have read this entire manual and are familiar with the inner workings of CodeRunner. Even then, you should proceed with caution. These prototypes are not for normal use: they are akin to base classes in a prototypal inheritance system like JavaScript's. If you duplicate a prototype question, the question type will become unusable, as CodeRunner doesn't know which version of the prototype to use.

Once you have installed the CodeRunner question type, you should be able to run CodeRunner questions using the University of Canterbury's Jobe server as a sandbox. It is recommended that you do this before proceeding to install and configure your own sandbox. Using the standard Moodle web interface, either as a Moodle administrator or as a teacher in a course you have set up, go to the Question Bank and try creating a new CodeRunner question. A simple Python3 test question is: "Write a function sqr that returns the square of its parameter n."

The introductory quick-start guide in the (incomplete) Question Authoring Guide gives step-by-step instructions for creating such a question. Alternatively, you can just try to create a question using the online help in the question authoring form. Test cases for the question might be:
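The test cases themselves are elided in this copy. As a hedged illustration of what the sqr question's tests might look like, here is one possible student answer together with the sort of test code the author might enter (the specific values are illustrative, not taken from the original):

```python
# One possible student answer to "Write a function sqr that returns
# the square of its parameter n":
def sqr(n):
    return n * n

# Typical test cases the author might enter, each printing the result
# so that it can be matched against the expected output:
print(sqr(0))    # expected output: 0
print(sqr(7))    # expected output: 49
print(sqr(-11))  # expected output: 121
```

With the default EqualityGrader, each printed value must exactly match the expected output entered for that test case.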

You could check the 'Use as example' checkbox on the first two, which results in the student seeing a simple "For example" table, and perhaps make the last case a hidden test case. It is recommended that all questions have at least one hidden test case, to prevent students synthesising code that works just for the known test cases. Sample question files are provided; these contain most of the questions from the two tutorial quizzes on the demo site. If you wish to run the questions in the file python3demoquestions, you may also need to import the associated prototype questions first.

Although CodeRunner has a flexible architecture that supports various different ways of running student tasks in a protected "sandboxed" environment, only one sandbox, the Jobe sandbox, is supported by the current version. This sandbox makes use of a separate server, developed specifically for use by CodeRunner, called Jobe. As explained at the end of the section on installing CodeRunner from scratch, the initial configuration uses the Jobe server at the University of Canterbury.

This is not suitable for production use. Please switch to using your own Jobe server as soon as possible. Then use the Moodle administrator interface for the CodeRunner plug-in to specify the Jobe host name and perhaps port number. Depending on how you've chosen to configure your Jobe server, you may also need to supply an API-Key through the same interface. Assuming you have built Jobe on a separate server, the JobeSandbox fully isolates student code from the Moodle server.

However, Jobe can be installed on the Moodle server itself, rather than on a completely different machine. This works fine, but is much less secure than running Jobe on a completely separate machine. If a student program manages to break out of the sandbox when it's running on a separate machine, the worst it can do is bring the sandbox server down, whereas a security breach on the Moodle server could be used to hack into the Moodle database, which contains student run results and marks.

That said, our Computer Science department used an earlier, even less secure, sandbox for some years without any ill effects. Moodle keeps extensive logs of all activities, so a student deliberately breaching security is taking a huge risk. If your Moodle installation includes the phpunit system for testing Moodle modules, you might wish to test the CodeRunner installation.

Most tests require that at least python2 and python3 are installed. You should then initialise the phpunit environment with the commands. You can then run the full CodeRunner test suite with one of the following two commands, depending on which version of phpunit you're using:

This will almost certainly show lots of skipped or failed tests relating to the various sandboxes and languages that you have not installed. These can all be ignored unless you plan to use those capabilities. The names of the failing tests should be sufficient to tell you whether you need be at all worried.

Feel free to email the principal developer if you have problems with the installation. Although it's straightforward to write simple questions using the built-in question types, anything more advanced than that requires an understanding of how CodeRunner works. The block diagram below shows the components of CodeRunner and the path taken as a student submission is graded.

Firstly, it is not always necessary to run a different job in the sandbox for each test case. Instead, all tests can often be combined into a single executable program. This is achieved by use of what is known as a "combinator template" rather than the simpler "per-test template" described above. Combinator templates are useful with questions of the write-a-function or write-a-class variety. They are not often used with write-a-program questions, which are usually tested with different standard inputs, so multiple execution runs are required.

Furthermore, even with write-a-function questions that do have a combinator template, CodeRunner will revert to running tests one at a time (still using the combinator template) if running all tests in the one program gives some form of runtime error, so that students can be presented with all test results up until the one that failed. Secondly, the above description of the grading process ignores template graders, which do grading as well as testing.

These support more advanced testing strategies, such as running thousands of tests or awarding marks in more complex ways than is possible with the standard option of either "all-or-nothing" marking or linear summation of individual test marks.

A per-test-case template grader can be used to define each row of the result table, or a combinator template grader can be used to define the entire feedback panel, with or without a result table. See the section on grading templates for more information.

CodeRunner supports a wide variety of question types and can easily be extended to support others. A CodeRunner question type is defined by a question prototype, which specifies run-time parameters like the execution language and sandbox, and also the template that defines how a test program is built from the question's test cases plus the student's submission. The prototype also defines whether the correctness of the student's submission is assessed by use of an EqualityGrader, a NearEqualityGrader or a RegexGrader.

The EqualityGrader expects the output from the test execution to exactly match the expected output for the testcase.

The NearEqualityGrader is similar but is case-insensitive and tolerates variations in the amount of white space. The RegexGrader expects a regular expression match instead. The EqualityGrader is recommended for all normal use, as it encourages students to get their output exactly correct; they should be able to resubmit almost-right answers for a small penalty, which is generally a better approach than trying to award part marks based on regular expression matches.

Test cases are defined by the question author to check the student's code. Each test case defines a fragment of test code, the standard input to be used when the test program is run, and the expected output from that run. The author can also add additional files to the execution environment. The test program is constructed from the test case information plus the student's submission, using the template defined by the prototype. The template can be either a per-test template, which defines a different program for each test case, or a combinator template, which has the ability to define a program that combines multiple test cases into a single run.

Templates are explained in the Templates section. The C-function question type expects students to submit a C function, plus possible additional support functions, to some specification. As a trivial example, the question might ask "Write a C function with signature int sqr(int n) that returns the square of its parameter n".

The author will then provide some test cases of the appropriate form. A per-test template for such a question type would then wrap the submission and the test code into a single program:
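The wrapped C program itself is elided in this copy. As a rough sketch of the same wrapping idea, expressed in Python rather than C (the submission and test code shown are hypothetical), a per-test template simply concatenates the student's submission with the test code for one test case:

```python
# Hypothetical student submission (substituted into the template):
student_answer = "def sqr(n):\n    return n * n"

# Hypothetical test code for one test case:
test_code = "print(sqr(-11))"

# A per-test template just wraps the two into one runnable program.
program = student_answer + "\n\n" + test_code

# The sandbox would compile (if necessary) and run this program,
# capturing its standard output for comparison with the expected output.
exec(program, {})  # prints 121
```

In the real C-function case, the template additionally supplies the standard includes and a main function around the test code, but the principle is the same.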

The output from the run would then be compared with the specified expected output, and the test case would be marked right or wrong accordingly. That example assumes the use of a per-test template rather than the more complicated combinator template that is actually used by the built-in C function question type. See the section on templates for more. The built-in question types are defined by prototype questions; a system administrator can edit those prototypes, but this is not recommended, as the modified versions will be lost on each upgrade.

New prototype question types can also be created in that category. Editing of prototypes is discussed later in this document. The c_function question type is the one discussed in the above example, except that it uses a combinator template. The student supplies just a function (plus possible support functions) and each test is typically a call to that function. The template for this question type generates some standard includes, followed by the student code, followed by a main function that executes the tests one by one.

However, if any of the test cases have any standard input defined, the template is expanded and executed separately for each test case. The manner in which a C (or any other) program is executed is not part of the question type definition: it is defined by the particular sandbox to which the execution is passed. The Jobe sandbox uses the gcc compiler with the language set to accept C99 and with both -Wall and -Werror options set on the command line, so it issues all warnings and rejects the code if there are any warnings.

These two very simple question types require the student to supply a complete working program. For each test case the author usually provides stdin and specifies the expected stdout.

The program is compiled and run as-is and, in the default all-or-nothing grading mode, must produce the right output for all test cases to be marked correct. The python3 question type is used for most Python3 questions: for each test case, the student code is run first, followed by the test code. There is also a variant of the python3 question type in which the input function is redefined at the start of the program, so that the standard input characters it consumes are echoed to standard output, as they are when typed on the keyboard during interactive testing.

A slight downside of this question type compared to the python3 type is that the student code is displaced downwards in the file, so that line numbers present in any syntax or runtime error messages do not match those in the student's original code. The python2 question type is used for most Python2 questions: as for python3, the student code is run first, followed by the sequence of tests.

This question type should be considered obsolescent due to the widespread move to Python3 throughout the education community. The java_method question type is intended for early Java teaching, where students are still learning to write individual methods. The student code is a single method, plus possible support methods, that is wrapped in a class together with a static main method containing the supplied tests, which will generally call the student's method and print the results.

With the java_class question type, the student writes an entire class (or possibly multiple classes in a single file). The test cases are then wrapped in the main method of a separate public test class, which is added to the student's class, and the whole is then executed.

The class the student writes may be either private or public; the template replaces any occurrences of public class in the submission with just class. While students might construct programs that will not be correctly processed by this simplistic substitution, the outcome will simply be that they fail the tests.
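A minimal sketch of that substitution, assuming a simple textual replacement (which is roughly what the description implies; the exact mechanism CodeRunner uses may differ):

```python
import re

def strip_public(submission: str) -> str:
    """Replace 'public class' with 'class' so the student's class can
    coexist with the separate public test class in one source file."""
    return re.sub(r'\bpublic\s+class\b', 'class', submission)

java_src = "public class Stack { /* student code */ }"
print(strip_public(java_src))  # -> "class Stack { /* student code */ }"
```

As the text notes, a student could deliberately construct code this replacement mishandles, but the only consequence is a failed test run.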

They will soon learn to write their classes in the expected manner. With the java_program question type, the student writes a complete program, which is compiled and then executed once for each test case to see if it generates the expected output for that test.

The name of the main class, which is needed for naming the source file, is extracted from the submission by a regular expression search for a public class with a public static void main method. The octave_function question type uses the open-source Octave system to process Matlab-like student submissions.
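The main-class extraction mentioned above for Java programs can be sketched roughly as follows (the actual regular expressions CodeRunner uses are not shown in this copy and may well differ):

```python
import re

def main_class_name(src: str) -> str:
    """Return the name of a public class whose body contains a
    public static void main method, so the source file can be named
    accordingly. Crude sketch: slice the source between successive
    class declarations rather than truly parsing Java."""
    matches = list(re.finditer(r'public\s+class\s+(\w+)', src))
    for i, m in enumerate(matches):
        end = matches[i + 1].start() if i + 1 < len(matches) else len(src)
        if re.search(r'public\s+static\s+void\s+main\b', src[m.end():end]):
            return m.group(1)
    return ''

src = """
public class Helper { }
public class MyProg {
    public static void main(String[] args) { }
}
"""
print(main_class_name(src))  # -> "MyProg"
```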

As discussed later, this base set of question types can be customised or extended in various ways. The following question types used to exist as built-ins but have now been dropped from the main install, as they are intended primarily for University of Canterbury (UOC) use only. With the pylint variants, the student submission is first passed through the pylint source-code analyser, and the submission is rejected if pylint gives any errors.

Otherwise testing proceeds as normal. Obviously, pylint needs to be installed on the sandbox server. This question type takes many different template parameters (see the section entitled Template parameters for an explanation of what these are) to allow it to be used for a wide range of different problems. For example, it can be configured to require or disallow specific language constructs. Details on how to use this question type, or any other, can be found by expanding the Question Type Details section in the question editing page.

The matlab_function question type is used for Matlab function questions. Student code must be a function declaration, which is tested with each test case. The name is actually a lie, as this question type now uses Octave instead, which is much more efficient and easier for the question author to program with in the CodeRunner context.

However, Octave has many subtle differences from Matlab, and some problems are inevitable. Caveat emptor. Another Matlab-based question type runs the test code first (which usually sets up a context) and then runs the student's code, which may or may not generate output depending on the context. Finally, the code in Extra Template Data is run, if any.

Octave's disp function is replaced with one that emulates Matlab's more closely but, as above, caveat emptor. Templates are the key to understanding how a submission is tested. Every question has a template, either imported from the question type or explicitly customised, which defines how the executable program is constructed from the student's answer, the test code, and other custom code within the template itself.

A question's template can be either a per-test template or a combinator template. The first one is the simpler; it is applied once for every test in the question to yield an executable program which is sent to the sandbox.

Each such execution defines one row of the result table. Combinator templates, as the name implies, are able to combine multiple test cases into a single execution, provided there is no standard input for any of the test cases. We will discuss the easier per-test template first. The template is expanded with the student's answer and the test code; the expanded template is then sent to the sandbox, where it is compiled (if necessary) and run with the standard input defined in the test case.

The output returned from the sandbox is then matched against the expected output for the test case, where a 'match' is defined by the chosen validator: an exact match, a nearly exact match or a regular-expression match. Expansion of the template is done by the Twig template engine. The template will typically use just the TEST.testcode field, which is usually a bit of code to be run to test the student's answer.

A typical test is thus just a fragment of code, such as a call to the student's function. The result of substituting both the student code and the test code into the template might then be the following program (depending on the student's answer, of course). When authoring a question, you can inspect the template for your chosen question type by temporarily checking the 'Customise' checkbox. Additionally, if you check the Template debugging checkbox, you will see in the output web page each of the complete programs that gets run during a question submission.
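The expanded program itself is elided in this copy. As a stand-in for the real Twig engine, the substitution of the two usual template fields can be mimicked with naive string replacement (the template text and field names here follow common CodeRunner conventions but are assumptions, not taken from the original):

```python
# A minimal stand-in for Twig expansion of a per-test template.
# Real CodeRunner uses the Twig engine; plain string replacement is
# enough to show the idea.
template = """{{ STUDENT_ANSWER }}

{{ TEST.testcode }}
"""

student_answer = "def sqr(n):\n    return n * n"  # student's submission
testcode = "print(sqr(-7))"                       # one test case's code

program = (template
           .replace("{{ STUDENT_ANSWER }}", student_answer)
           .replace("{{ TEST.testcode }}", testcode))
print(program)
```

The resulting `program` string is what gets sent to the sandbox for compilation and execution.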

The template for a question is by default defined by the CodeRunner question type, which is itself defined by a special "prototype" question, to be explained later.

You can inspect the template of any question by checking the customise box in the question authoring form. You'll also find a checkbox labelled Is combinator; if this checkbox is checked, the template is a combinator template. The actual template used by the built-in C function question type is not a per-test template as suggested above, but the following combinator template.

If a C-function question had, say, three test cases, the above template might expand to something like the following. The output from the execution is then the outputs from the three tests, separated by a special separator string, which can be customised for each question if desired. On receiving the output back from the sandbox, CodeRunner splits the output at the separator into three separate test outputs, exactly as if a per-test template had been used on each test case separately.
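The expanded combinator program is elided in this copy, but the separator mechanism it relies on can be sketched in Python: one program runs every test, printing a separator string between them, and the combined output is then split back into per-test results. The separator string and the sqr example below are illustrative stand-ins:

```python
import io
import contextlib

SEPARATOR = "#<ab@17943918#@>#"  # illustrative; the real string is configurable

student_answer = "def sqr(n):\n    return n * n"
tests = ["print(sqr(0))", "print(sqr(7))", "print(sqr(-11))"]

# Build one combined program: each test followed by the separator.
body = ("\nprint('" + SEPARATOR + "')\n").join(tests)
program = student_answer + "\n" + body

# Run the combined program once and capture its output.
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    exec(program, {})

# Split back into one output per test, as CodeRunner does.
outputs = [chunk.strip() for chunk in buf.getvalue().split(SEPARATOR)]
print(outputs)  # -> ['0', '49', '121']
```

This also makes clear why the strategy fails with standard input or a premature abort: a crash in test two would swallow the remaining separators, which is why CodeRunner then falls back to one run per test.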

This strategy can't be used with questions that require standard input as there's no reliable way to reset the standard input for each test. The strategy also fails if the student's code causes a premature abort due to a run error, such as a segmentation fault or a CPU time limit exceeded.

As mentioned above, if a question author checks the customise checkbox, the question template is made visible and can be edited to modify the behaviour for that question. The authoring interface allows the author to set the size of the student's answer box; in a case like the above, you'd typically set it to just one or two lines in height and perhaps 30 columns in width.

The above example was chosen to illustrate how template editing works, but it's not a very compelling practical example. It would generally be easier for the author, and less confusing for the student, if the question were posed as a standard built-in write-a-function question, using the Preload capability in the question authoring form to pre-load the student answer box with a suitable skeleton.

If you're customising templates, or developing your own question type see later , the combinator template doesn't normally offer sufficient additional benefit to warrant the complexity increase unless you have a large number of testcases or are using a slow-to-launch language like Matlab.

It is recommended that you always start with a per-test template, and move to a combinator template only if you have an obvious performance issue. It may not be obvious from the above that the template mechanism allows for almost any sort of question where the answer can be evaluated by a computer. In all the examples given so far, the student's code is executed as part of the test process but in fact there's no need for this to happen. The student's answer can be treated as data by the template code, which can then execute various tests on that data to determine its correctness.

The Python pylint question type mentioned earlier is a simple example: the template code first writes the student's code to a file and runs pylint on that file before proceeding with any tests. Note that any output written to stderr is interpreted by CodeRunner as a runtime error, which aborts the test sequence, so the student sees the error output only on the first test case.

A Matlab question in which the template code (also Matlab) breaks the student's code down into functions, checking the length of each to make sure it's not too long, before proceeding with marking. Another, more advanced, Matlab question in which the template code, written in Python, runs the student's Matlab code, then runs the sample answer supplied within the question, extracts all the floating-point numbers in both outputs, and compares the numbers for equality to some given tolerance.

A Python question where the student's code is actually a compiler for a simple language. The template code runs the student's compiler, passes its output through an assembler that generates a JVM class file, then runs that class with the JVM to check its correctness.

A Python question where the student's submission isn't code at all, but a textual description of a Finite State Automaton for a given transition diagram; the template code evaluates the correctness of the supplied automaton.

The python escaper, e('py'), is just one of the available escapers. Others include one that escapes single quotes, percents and newline characters; it must be used in the context of Matlab's sprintf.

These are Twig built-ins. See the Twig documentation for details. It is sometimes necessary to make quite small changes to a template over many different questions. For example, you might want to use the pylint question type given above but change the maximum allowable length of a function in different questions.

Customising the template for each such question has the disadvantage that your derived questions no longer inherit from the original prototype, so that if you wish to alter the prototype you will also need to find and modify all the derived questions, too. In such cases a better approach is to use template parameters, which can be defined by the question author in the "Template params" field of the question editing form. This field must be set to a JSON-encoded record containing definitions of variables that can be used by the template engine to perform local per-question customisation of the template.
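As a hypothetical example (the parameter name maxfunctionlength is purely illustrative, not a built-in), the Template params field might contain a small JSON record, which CodeRunner decodes and makes available to the Twig engine when expanding the template:

```python
import json

# What a question author might type into the "Template params" field
# of the question editing form (the parameter name is illustrative):
template_params_field = '{"maxfunctionlength": 30}'

# CodeRunner JSON-decodes the field and exposes each attribute to the
# template engine, so one shared template can vary per question.
params = json.loads(template_params_field)
print(params["maxfunctionlength"])  # -> 30
```

Each derived question can then override just this value while continuing to inherit the template from the prototype.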

Most of these attributes are effectively read-only: assigning a new value within the template to the cputimelimitsecs attribute does not alter the actual run time, because the assignment statement is executed in the sandbox after all resource limits have been set.

The question author can, however, alter all the above question attributes directly in the question authoring form. Using just the template mechanism described above, it is possible to write almost arbitrarily complex questions. Grading of student submissions can, however, be problematic in some situations.

For example, you may need to ask a question where many different valid program outputs are possible, and the correctness can only be assessed by a special testing program. Or you may wish to subject a student's code to a very large number of tests and award a mark according to how many of the test cases it can handle. The usual exact-match grader cannot handle these situations. For such cases the TemplateGrader option can be selected in the Grader field of the question authoring form.

The template code then has a somewhat different role: the output from running the expanded template program is required to be a JSON string that defines the mark allocated to the student's answer and provides appropriate feedback. A template grader behaves in two very different ways depending on whether the template is a per-test template or a combinator template. When the template is a per-test template and a TemplateGrader is selected, the output from the program must be a JSON string that defines one row of the test results table.

The JSON object must at least contain a 'fraction' field, giving the mark for the test as a fraction of its full mark. It should usually also contain a 'got' field, which is the value displayed in the 'Got' column of the results table. The other columns of the results table (testcode, stdin, expected) can, if desired, also be defined by the template grader and will then be used instead of the values from the test case.

For example, the output of the program might be a JSON string defining both the mark fraction and the 'got' value. The result columns field of the question authoring form allows the author to define an arbitrary number of arbitrarily named result-table columns and to specify, using printf-style formatting, how the attributes of the grading output object should be formatted into those columns. For more details see the section on result-table customisation. Writing a grading template that executes the student's code is, however, rather difficult, as the generated program needs to be robust against errors in the submitted code.

The template-grader should always return a JSON object and should not generate any stderr output. It is recommended that template graders be written in Python, regardless of the language in which the student answer is written, and that Python's subprocess module be used to execute the student code plus whatever test code is required.
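Following that recommendation, a minimal sketch of a per-test template grader in Python is shown below. The student code, test, and expected value are hypothetical, and a real grader would need far more error handling; the point is that running the student code in a subprocess keeps its failures from crashing the grader itself:

```python
import json
import subprocess
import sys

def grade(student_code: str, test_code: str, expected: str) -> dict:
    """Run the student's code plus one test in a subprocess and build
    the JSON result-row object. A subprocess means syntax or runtime
    errors in the student code cannot break the grader program."""
    program = student_code + "\n" + test_code
    result = subprocess.run([sys.executable, "-c", program],
                            capture_output=True, text=True, timeout=5)
    got = result.stdout.strip() if result.returncode == 0 else result.stderr.strip()
    return {
        "fraction": 1.0 if result.returncode == 0 and got == expected else 0.0,
        "got": got,
    }

outcome = grade("def sqr(n):\n    return n * n", "print(sqr(-9))", "81")
print(json.dumps(outcome))  # -> {"fraction": 1.0, "got": "81"}
```

Only the final JSON string is printed to stdout, and nothing goes to stderr, matching the constraints stated above.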

This ensures that syntax errors or runtime errors in the student code do not break the template program itself, allowing it to output a JSON answer regardless. Some examples of per-test template graders are given in the section Template grader examples. OTOH, there are a lot of others here that could be of better service than I. I don't know what your schedule is, but I have three weeks until the first day of class, and I would never try to build a Moodle quiz in that time, especially as my first.

I would suggest that for now, design a paper test then digitize it into Moodle as a first step. Write it up as if you were going to print it out to collect papers to grade.

Instead of printing it, place it as a description or attachment in Moodle. Students will log into Moodle to see the exam and perform the tasks. They will do their work in an editor such as MS Word and submit the document as a file into the assignment.

It may be a bit challenging to do it all on one screen, but they can manage it. If it is to be done in a small-screen environment, then all this will take some rethinking. You should consider whether you want them to answer the question or to compile their responses to see if it works.

You can, however, start designing the assessment for next semester and have plenty of time to get it working. Still, there is a learning curve here that may not allow finishing in less than three weeks.

To get to know Moodle Quiz, you might design a simple, playful quiz just to get the idea of how each part or type works. Setting points for responses can be a challenge and verifying your answers may well be too.

Also, you are going to have to teach the students how to take the quiz. I thought I saw a tutorial on it; it might have been on YouTube. Here is one on YouTube, and here is one that shows how to create the quiz bank. I have not watched any of these, so I cannot evaluate them. It may take many views of the set to get to your first token exam. It can be very confusing just to know the vocabulary. Maybe someone will chime in with a better prospectus than what I tried here.

Once you have something started, it will be a lot easier to ask questions and for us to help you. Hi CSE. The answer is no. However, Al's suggestion (linked again below) sounds close to what you may be seeking.

It seems that many people think of helpful Moodlers as mind readers, among many other attributes. This is surely one case in which we would actually like having the so-much-requested "feature": "How can I grade, or know, that my students have read or viewed some material?" GM, I have looked at the documents, but they tend to be uninviting, especially with the use of light orange text on white. What would it take to have the creators of these policies at least use a darker orange?

Sounds like something outside the quiz feature, as it stands? I cannot but agree about the newer look, both in the editor icons and the policies page.

Ironic, isn't it? The former with only light gray icons and the latter with three orange tones. Regarding the use of color on this site, I would definitely differentiate glossary links from other links.

To that complaint I would also add that the original policy page (the previous one) had a table of contents that allowed one to emphasise a particular issue by referencing the corresponding link. I would love to know what the infestation is that is driving this effort to make the internet less and less readable.

I can only imagine that it is someone tucked away in a climate and lighting controlled room who is making all these decisions.

It is probably someone with good eyesight and access to a super high-def monitor, who cannot realize that the rest of the world (normal people) needs to use this stuff. We need to stand up to this.

From here, I would like to quote Churchill: "What kind of a people do they think we are? Do they not realize that we will never cease to persevere against them until they have been taught a lesson which they, and the world, will never forget." For one, all those sites with nano-sized letters that look like the footsteps of ants, or those that not only use a black background but lack any contrast, would instantly disappear! I guess the answer would be that they think that we are the ones without power, which, given the way things are and go, would essentially be correct.

I think almost anything like this is likely to require Linux on the server side, because of the need to launch processes like compilers and run things in sandboxes. That is just so much easier to do from PHP on Linux. If necessary, could you set up a separate server specifically for these programming tests? You could at least set up a Linux VM and make a test install to try things out, so you can decide whether it is worth taking things further.

I host my web site through an online web hosting service. Hello there. If you'd like to talk about installing CodeRunner or compilers, would you like to start a new thread elsewhere please, so we can keep this forum for teaching-related queries only? Sorry for the short response, but I am at a French campsite with extremely poor wifi and I am having to be economical with my words and time.

Regarding the installation of compilers, that is not a Moodle issue, so you need to do some DuckDuckGoing or Googling, if you will. I had a look at it. My itch is about you addressing people here as Sir; you really do not have to do this.

This is an English-language forum, and that means Sir has cultural connotations. That said, if you are comfortable with addressing peeps like that, no problem. Here in the southern part of the US, calling someone you don't know well sir, ma'am, or miss is just good manners. In fact, I routinely walk up to students that I don't know in the hall and say, "excuse me, sir" or "excuse me, miss" and then launch into what I need to say.

There is an age-range issue with the word "ma'am", however, that you should know: it can make some women feel old around here. It's still poor form not to use it, though. However, we are not in the southern part of Georgia or wherever; we are online, metaphorically speaking. I am not saying that is the case here, of course.


Hello everyone, greetings. Can someone guide me on it?


