XBlock Reporting Tool

Instructors (including CCX coaches) want to know: what answers did my students give to the questions in my course? This applies to multiple choice questions, free text questions, essay questions, polls, and more. Unfortunately there is currently no nice tool for instructors to view student answers.

The status quo is:

  • Insights shows the distribution of student answers for certain problem types (essentially multiple-choice-style questions only), but it doesn’t show which students chose which answers; it is optimized for huge courses, not small ones.
  • The Instructor dashboard provides a method for downloading a CSV containing student data for any given problem, but the data is difficult to interpret (idiosyncratic JSON format) and is only available for one problem at a time.
  • The Problem Builder XBlock provides a reporting tool that covers most of the desired functionality, but it only works with Problem Builder XBlocks - it doesn’t report on other types of XBlocks. Setup is also a bit unintuitive, since the tool must be added to a staff-only section of the course.
  • The Poll XBlock provides built-in reporting and CSV downloading, but again it only works for Poll XBlocks, and it uses yet another interface.

Proposed Solution:

The proposed solution is essentially to take the “Instructor Tool” functionality that exists in Problem Builder, make it a built-in feature by moving it to the Instructor Dashboard, and make it compatible with any type of XBlock (not just Problem Builder).

Screenshot (showing features that exist today):

What this will provide:

  • One consistent tool instructors can use whenever they want to know what their students have answered for any problem in the course.
  • Works regardless of how each XBlock stores its data, since XBlocks can use a plugin interface to add their reporting capabilities to this tool.
  • Flexible scope: view a report for the whole course, a single section, or a specific problem (already implemented and working).
  • Results can easily be viewed online or downloaded as a CSV report (already implemented and working).
  • Answers can be viewed for a specific student or for all students (already implemented and working).
  • CCX coaches can view their students’ answers in a CCX course (already implemented and working).

Implementation details:

  • Installed XBlocks can indicate that they are “reportable”.
  • The new Reporting Tool would iterate over all installed XBlocks to determine which ones can be reported on.
  • When generating a report, an asynchronous celery task will iterate over all selected XBlocks+users and call
    Block.generate_report_data(course_key, block_key, get_block, user_ids, match_string)
  • This must be a static method that can return an arbitrary number of labelled columns to be included in the report (e.g. most blocks will return a “Question” and an “Answer” column).
  • The get_block() callable can be used to obtain a fully-instantiated version of the XBlock within the LMS runtime, as the user in question; however, this is inefficient and should only be used when necessary. In general, the submissions API or the XBlockUserStateClient should be used instead.
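To make the hook concrete, here is a toy sketch of what a block’s generate_report_data could look like, following the signature above. All helper names (FakeBlock, fake_get_block, the student_answer/question attributes) are hypothetical stand-ins for illustration; the final API may differ, and a real block would normally read state in bulk via the submissions API or XBlockUserStateClient rather than instantiating a block per user as this version does:

```python
from collections import namedtuple

def generate_report_data(course_key, block_key, get_block, user_ids, match_string=None):
    """Sketch of the proposed static hook.

    Yields (user_id, row_dict) pairs, where row_dict maps column labels
    to values. Calling get_block() for every user, as done here, is the
    slow path and is shown only for simplicity.
    """
    for user_id in user_ids:
        block = get_block(block_key, user_id)  # block instantiated as this user
        answer = getattr(block, "student_answer", "")
        if match_string and match_string not in str(answer):
            continue  # optional substring filter over answers
        yield user_id, {"Question": getattr(block, "question", ""), "Answer": answer}

# Hypothetical usage, with a fake get_block standing in for the LMS runtime:
FakeBlock = namedtuple("FakeBlock", ["question", "student_answer"])
_STATE = {1: FakeBlock("2+2?", "4"), 2: FakeBlock("2+2?", "5")}

def fake_get_block(block_key, user_id):
    return _STATE[user_id]

rows = list(generate_report_data("course-v1:demo", "problem1", fake_get_block, [1, 2]))
```

The celery task would then merge these labelled rows across all selected blocks and users into the final CSV.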

This sounds like a great idea. I can see a lot of utility for this in SPOCs.

How would this work with JSInput problems? Is there a way the student state could be stored that might make it easier for this tool to interpret?

@Colin_Fredericks The reporting tool could display the raw “State” data for JSInput problems, but it wouldn’t be as nice or as human-readable as the reporting for other problem types, because there’s so much variability in how JSInput problems use the state. So I think it’s a question of making the student state easier for humans to interpret, rather than for this tool to interpret.

This is more or less exactly what we’re after. If those sections/questions are shown with their proper titles rather than their “under the floorboards” internal names, that will be an important detail.

Thanks @Braden!
Your short summary makes a lot of sense.
At Campus, we have also recently started to discover and refine our requirements for the classroom experience for teachers; you can view this doc: https://drive.google.com/open?id=1PvjJ4eUKFd_4eUq1gdU4SFky68HkF183i3uyrciscR0 (which is very much a WIP :frowning:)

CC: @eran @anna

Also, a question for @Colin_Fredericks (Hi Colin!):
What is the problem you see with reporting from JSInput?
To my recollection, as long as you write a good content grader for the problem, you can report it back.
Am I missing something? (e.g. you have a problem whose grader outputs A, B, C, or D, and you report that back to the LMS)

If I want just the grade reported, it’s no problem, but if I want more detailed reporting it’s an issue. For instance, with multiple choice problems it can be useful to see which of the distractors is chosen most often. With numerical problems, it can be useful to see which wrong answers were given, so that the problem can be adjusted or hints can be given. With custom javascript problems, there are so many different ways that things could be done that building a report would necessarily throw out most of the information about what students did in the assignment.

Also: JSInput problems that are really done properly don’t report a grade back. If they did, that would mean the grading is done in javascript, which would mean the grading code is exposed to the students (even if most of them wouldn’t know how to access it). Done right, they report the student’s problem state back to the LMS. For instance, the matching problem type returns an object with a list of lists. The video watch problem returns an object with the video length, the start time, and a list of times. The LMS then uses python to grade that state server-side.
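Following that pattern, here is a hedged sketch of what the server-side Python grading of such reported state might look like. The "pairs" state shape, the function name, and the call convention are all hypothetical (the exact shape depends entirely on how the individual JSInput problem is written), but it illustrates why the grading logic stays out of the student-visible javascript:

```python
import json

def grade_matching(expected_pairs, state_json):
    """Hypothetical server-side grader for a JSInput matching problem.

    Assumes the student's state arrives as a JSON string like
    '{"pairs": [["item", "target"], ...]}'. Because the check runs in
    Python on the server, no answer logic is exposed to students.
    """
    try:
        pairs = json.loads(state_json).get("pairs", [])
    except (ValueError, AttributeError):
        return False  # malformed or missing state counts as incorrect
    # Compare as order-independent sets of (item, target) pairs.
    return sorted(map(tuple, pairs)) == sorted(map(tuple, expected_pairs))

# The order in which the student made the matches doesn't matter:
ok = grade_matching(
    [("cat", "mammal"), ("frog", "amphibian")],
    json.dumps({"pairs": [["frog", "amphibian"], ["cat", "mammal"]]}),
)
```

A reporting tool could only ever show this raw state verbatim; turning it into human-readable report columns would require per-problem knowledge like the above.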

As Joe mentioned, this is almost exactly what we’re looking for at the ANU. We have funding that we are planning on contributing to this work.

Shelby at edX asked:

Will this functionality work with courses of any size, or is it limited to a smaller set of learners?

Problem Builder uses the Submissions API to fetch all submissions for a given block, and so the number of inactive learners in a course will not affect performance of the tool.

But to prevent performance issues for large numbers of active learners, we can set a (high, configurable) limit on the number of submissions fetched per block.
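A minimal sketch of that safeguard (all names here are hypothetical; the real submissions API call and its return shape may differ) is simply to cut off iteration at the configured cap:

```python
import itertools

def fetch_capped_submissions(fetch_all, block_id, limit=10000):
    """Sketch of the proposed safeguard: pull submissions for one block,
    but stop after a high, configurable limit so that a block with an
    enormous number of active learners cannot stall report generation.

    `fetch_all` stands in for the real submissions API call, assumed to
    return an iterable of submission dicts.
    """
    return list(itertools.islice(fetch_all(block_id), limit))

# Hypothetical usage, with a generator standing in for the real API:
def fake_fetch_all(block_id):
    for i in range(1_000_000):
        yield {"block": block_id, "submission": i}

capped = fetch_capped_submissions(fake_fetch_all, "problem1", limit=500)
```

Because islice stops the underlying iterator early, the remaining submissions are never fetched at all, keeping memory and runtime bounded per block.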

Problem Builder also uses the Course Blocks API to fetch the list of blocks to display in the LMS, and this data is fast and cached.

Finally, an asynchronous task generates the report on demand, so LMS GUI responsiveness is not affected.

What’s the plan for the data store for this? Reading between the lines, it sounds like you’ll create periodic reports that the instructor dashboard would be displaying?

@pdpinch I’m thinking the reports will be generated on-demand, and stored in the same place all other CSV-based reports are stored. So not really any long-term storage of this report data, though it can be consumed as an API and stored elsewhere by external systems.

BTW after discussing with Shelby at edX, it sounds like we’ll be implementing this as an evolution of the existing “problem response report” download in the instructor dashboard, rather than adding it in as a new separate feature+UI. More details to come soon.

adding @anna to the thread

The solution looks interesting to me. The important feature here is the ability to download the data as CSV. We are now working on using external data visualization tools with edX data sources to enrich the user experience (mostly for instructors). One tool I used was Tableau, and it was quite easy to connect it to our database and get interesting results quickly. I think we can explore using these tools in several scenarios, including this one.

@jill_opencraft @yoavca @Braden

We are in the process of implementing this now, and it is built around CSV downloads, yes.

Parts of this functionality have now been merged into edx-platform master and should be available in the upcoming release.

Here are the tickets that are merged:

These PRs are waiting for approval: