Handwriting Recognition and Math Formula

Learnosity Announces Addition of Handwriting Recognition Technology to Math Questions

Translates students’ handwritten input into digital, computer-scored Math equations. Makes it significantly easier for students using touch screen devices such as iPad and Android tablets to enter complex Math formulas.

(NEW YORK, March 2, 2015) Learnosity, a transformative education technology company, today announced the launch of their new suite of technology-enhanced question types enabled with handwriting recognition technology. This allows students to digitally “write” complex equations using operators and integrals without the need for a custom keyboard on screen.

Students can choose between using the customizable onscreen keyboard to input symbols or simply “writing” the required symbols, which the handwriting recognition technology translates into machine-readable, digital information so that answers can be automatically graded. This gives students all the cognitive benefits of handwriting whilst also allowing them to avail of the powerful auto-grading capabilities of the Learnosity Math engine, which evaluates a student’s response for mathematical accuracy rather than just treating it as a string of text. This means that any mathematically correct answer, even one in a different form from the answer entered by the question author, will be accepted.

“Traditional keyboards don’t have the majority of symbols required for Math equations readily available to users, so for our Math Formula question types we’ve created customizable onscreen keyboards,” said Learnosity CTO Mark Lynch. “This is perfect for desktops and laptops; however, the increasing prevalence of touch screen devices allows us to further optimize the educational experience by enabling our Math questions with handwriting recognition technology.”

Learnosity has partnered with MyScript, the acknowledged market leader in accurate, high-performance handwriting recognition and digital ink management technology, to offer this new enhancement. As well as recognizing over 200 mathematical symbols and characters, the handwriting technology also recognizes geometric shapes and music notation, which opens up a wide range of educational applications.

Gavin Cooney, CEO, said: “While some may think of handwriting as a dying art, emerging research shows that there’s real value in continuing to practice this skill; handwriting increases brain activity and conceptual understanding, hones fine motor skills, and can predict a student’s academic success in ways that keyboarding can’t. This is because handwriting requires a different cognitive process and your brain is forced to do more when you write by hand.”

Watch the video 
Try it yourself

Gavin and Mark, along with VP of Business Development Ben Powell and President, Americas Judah Karkowsky, will be attending the ATP Innovations in Testing conference in Palm Springs, March 1st to 3rd. Learnosity will be presenting the following sessions during the conference:

  • Monday 11:30 am – 12:30 pm: Locking it Down: The Key to Test Readiness and Security (with Houghton Mifflin Harcourt)
  • Monday 4:00 pm – 5:00 pm: Cloud Based Assessment – A Match Made in Heaven
  • Tuesday 10:30 am – 12:00 pm: Don’t Just Check the Box – Authoring TEIs Made Easy (with Houghton Mifflin Harcourt)

More information on each of the individual sessions can be found here.

About Learnosity 
Learnosity is a rapidly expanding educational technology company. The company offers a set of tools and services that allow clients to incorporate powerful interactive assessment into their digital products quickly and easily. Run by a talented group of people who are passionate about transforming learning and assessment, Learnosity is committed to designing market-leading software to allow developers to create exceptional assessment products, make teachers’ lives easier, and above all, instill a love of learning in students. The Company is seeing an annual doubling of revenues, and works with some of the leading names in the Education industry. The Learnosity Toolkit was recently named the Best K-12 Enterprise Solution by the SIIA, and National Champion for Innovation in the European Business Awards. Learnosity has offices in NYC, Dublin and Sydney. For more information contact Learnosity on +353 (0) 1 4498789, info(at)learnosity(dot)com or visit http://www.learnosity.com.

This release originally appeared on PRWeb at: http://www.prweb.com/releases/2015/02/prweb12546418.htm

Learnosity’s ATP Innovations in Testing 2015 Sessions

Learnosity is heading back to ATP’s Innovations in Testing conference next week (March 1-4). If you are attending the ATP conference this year be sure to stop by our booth (#107) to meet the team or come to one of our hosted breakout sessions.

Attending this year’s conference will be CEO Gavin Cooney, CTO Mark Lynch, Vice President of Business Development Ben Powell and, the latest addition to the Learnosity team, Judah Karkowsky, President, Americas.

Learnosity is delighted to be joined by Houghton Mifflin Harcourt to discuss some of the most important industry topics: cloud-based assessment, test security, and authoring technology-enhanced item types (TEIs).

Full details of Learnosity’s Innovations in Testing sessions are as follows:

Locking it Down: The Key to Test Readiness and Security

Monday 11:30 am – 12:30 pm
Mark Lynch (CTO, Learnosity) and Linda Andries (Director, Digital Product Management, Houghton Mifflin Harcourt)

With the move toward online browser-based assessment for high-stakes assessment as well as for formative assessment, exam integrity and test security have become a constant talking point within the online testing industry. Browser-based assessment has obvious advantages in that test takers have immediate access to the assessments, eliminating the need to spend time downloading assessment software. It also means that candidates can use their own devices and do not necessarily need to go to designated testing centers. It does, however, bring its own set of disadvantages. How do you know that the test takers are not finding the answers online? Or that they are not recording the items to distribute to their peers at a later date? What measures can you put in place to ensure the probity of online browser-based assessment? Meeting these challenges requires creativity, technology, and, of course, funding. This session will explore some of the challenges and successes that the presenters and their clients have faced in implementing security measures to ensure the integrity of online assessment. Solutions include:

  • Secure browsers: Test takers are prevented from accessing other files, websites, and folders on their devices. In order to protect valuable intellectual property and to prevent cheating, they are also typically unable to print, copy, cut, or paste data from their screens.
  • Detailed event tracking: Every online action that the test taker makes is monitored for the duration of the assessment, and the system administrator is alerted if the test taker is outside of accepted behavior patterns.
  • Test windows: The assessment may only be taken during a set time.
  • Student verification process and pacing: Test administrators issue unique codes to individual test takers and have the ability to monitor the test takers’ progress in real time. They can also start, save, quit, pause, and allocate extra time, all at the click of a button.

Cloud Based Assessment – A Match Made in Heaven

Monday 4:00 pm – 5:00 pm
Gavin Cooney (CEO, Learnosity) and Mark Lynch (CTO, Learnosity)

The increasing prevalence of cloud-based assessment can be taken as a validation of this assessment delivery option. However, despite its increasing popularity, it is still a relatively new concept, and it is viewed as uncharted waters in many respects. The goal of this session is to better prepare those considering implementing a cloud-based assessment strategy by sharing some of the presenters’ experiences gained in delivering cloud-based assessment to millions of students across the USA. As well as discussing the general technical, business, and strategic advantages of leveraging the power of the cloud, the presenters will also discuss specific challenges that they have faced when delivering cloud-based assessments and the solutions implemented to overcome those challenges. The session will specifically focus on:

  • The improved testing experience for test takers (due to anytime, anywhere access)
  • The administrative and reporting benefits that real-time test progress tracking can offer
  • Ease of integration as compared with other assessment delivery methods
  • Perceived barriers to using cloud-based assessment, such as test security and academic validity
  • Technical challenges such as archaic technical infrastructure, limited bandwidth, and firewall and proxy restrictions
  • The cost efficiencies offered by the ability to dynamically scale to cope with fluctuating demand
  • The business benefits and challenges of having a constantly evolving product
  • The change from a traditional pricing and distribution model to a subscription-based model

In all of the above cases, the presenters will discuss cloud-based assessment in the specific context of the Common Core state standards and how it may help states, testing companies, and educational publishers to adapt. By the end of the session, attendees will have a clear picture of the current challenges and opportunities faced by cloud-based assessment as well as some insight into what is coming down the line in both the short and medium term.

Don’t Just Check the Box – Authoring TEIs Made Easy

Tuesday 10:30 am – 12:00 pm
Gavin Cooney (CEO, Learnosity) and Jennifer Lawrence (Manager, Development Systems, Houghton Mifflin Harcourt)

The Race to the Top Assessment Program and the introduction of the Common Core State Standards have resulted in an increased demand for new technologies and features in assessments. One of the most significant implications for online assessment has been the debut of technology-enhanced item types (TEIs). TEIs are question types that go beyond the traditional selected-response or constructed-response collection methods and instead require specialized interactions from test takers (e.g., ordering a group of elements chronologically by dragging and dropping, picking out key words in a paragraph by highlighting, and manipulating graphs and charts so that they match a stated function). While the use of TEIs has many benefits, there tends to be a higher cost involved in creating TEIs than in creating traditional questions. This is generally because a significant degree of technical expertise is required to create these more advanced question types. This session will examine how to lower the bar for assessment authors, moving authoring from a developer to a subject matter expert with little or no technical training. Presenters will also discuss some of the common challenges that they have encountered when creating interactive, online assessments:

  • Requirements that online tests be the same as the paper and pencil tests and associated tradeoffs
  • Available technology features driving item design
  • Creation of new learning scenarios with poorly designed TEIs
  • Metadata and interoperability requirements

This will be a fun, informative, and interactive session in which attendees will see how easy it can be to create complex TEIs from scratch. Attendees will be able to create, review, and publish directly from one authoring environment, with no need to create on paper or in Word or Excel. No prior experience is required, but attendees should bring a laptop if they want to get involved.

A lightweight geometry library

A couple of months back, we released a beta version of the question type Image Upload. It allows authors to upload an image and students to add annotations to mark their responses. For maximum flexibility, we needed a tool that authors can use to create various geometrical shapes to mark the response areas, i.e. the areas where students are expected to add their responses. For example, a question could be “On the map of Australia, please mark NSW”, and the author will need to be able to mark around the NSW border.

Image 1: NSW selected

If a student adds an annotation within the response area, the answer will be marked as correct (see image 2 for a correct answer).

Image 2: A correct answer

From a technical perspective, the drawing of the response area is implemented via the jsgl vector graphics library (www.jsgl.org). The image is added as background to a jsgl panel, and the polyline and polygon are rendered through jsgl’s polyline and polygon interfaces.

In order to validate a student’s response, we needed an algorithm to determine if the response lies within the specified response area, i.e. an algorithm to solve the ‘point in polygon’ problem. After researching a couple of freely available JavaScript geometry libraries, which offer implementations for the ‘point in polygon’ problem, we decided to implement the required algorithms ourselves. This is because we wanted to have a very lightweight library and also because some of the available implementations do not handle specific edge cases very well.

There are two main algorithms we needed to implement in our geometry library: 

Line-Line intersection

A line segment in our Geometry Library is given by a starting point P1 and an end point P2. Both points consist of an x and y coordinate. The standard line equation in variables x and y

Ax + By = C

can be derived from these points by setting:

A = P2.y - P1.y
B = P1.x - P2.x
C = A * P1.x + B * P1.y

If two line equations are given in the form

(1) A1x + B1y = C1
(2) A2x + B2y = C2

we can find their point of intersection by solving for the two unknowns x and y and then checking if the resulting point actually lies on both line segments. Solving for x and y can be done by multiplying equation (1) with B2 and equation (2) with B1:

(3) A1B2x + B1B2y = B2C1
(4) A2B1x + B1B2y = B1C2

and then subtracting equation (4) from (3):

A1B2x - A2B1x = x * (A1B2 - A2B1) = B2C1 - B1C2

If the value of (A1B2 - A2B1) is equal to 0, it means that the two lines are parallel and the algorithm needs to check if the line segments overlap anywhere. Otherwise, the value for x can be calculated by dividing both sides of the equation by (A1B2 - A2B1). In a similar manner, the value for y can be derived.
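The derivation above can be sanity-checked with a small standalone snippet. The helper names here (`lineFromPoints`, `solve`) are illustrative, not part of the Learnosity library:

```javascript
// Convert a point pair into the coefficients of Ax + By = C,
// exactly as derived above
function lineFromPoints(p1, p2) {
  var A = p2.y - p1.y;
  var B = p1.x - p2.x;
  var C = A * p1.x + B * p1.y;
  return { A: A, B: B, C: C };
}

// Solve the 2x2 system for the intersection point, or return null
// when the determinant A1B2 - A2B1 vanishes (parallel lines)
function solve(l1, l2) {
  var det = l1.A * l2.B - l2.A * l1.B;
  if (det === 0) {
    return null;
  }
  return {
    x: (l2.B * l1.C - l1.B * l2.C) / det,
    y: (l1.A * l2.C - l2.A * l1.C) / det
  };
}

// The diagonals of the unit square cross at (0.5, 0.5)
var p = solve(
  lineFromPoints({x: 0, y: 0}, {x: 1, y: 1}),
  lineFromPoints({x: 0, y: 1}, {x: 1, y: 0})
);
```

Note that this only finds the intersection of the infinite lines; the segment check (that the point lies between the endpoints) is the extra work done in the library code.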

In JavaScript, the implementation of the algorithm is as follows:

/**
 * This function determines if two line segments (specified by their start
 * and end points) intersect.
 * The algorithm simply solves the equation system for the two line equations:
 *   delta1_y * x + delta1_x * y = constant1
 *   delta2_y * x + delta2_x * y = constant2
 * where delta1_y = a2.y - a1.y, delta1_x = a1.x - a2.x and
 * constant1 = delta1_y * a1.x + delta1_x * a1.y
 * (similar for the second line).
 * For more information on the algorithm, research line-line intersection algorithms.
 * @param a1 start point of line 1, given by an object with an x and y float
 * @param a2 end point of line 1, given by an object with an x and y float
 * @param b1 start point of line 2, given by an object with an x and y float
 * @param b2 end point of line 2, given by an object with an x and y float
 * @returns true if the lines intersect or if they have the same slope and
 *          some points in common, false otherwise
 */
intersectLineLine: function(a1, a2, b1, b2) {
  // Allow for a certain tolerance as float operations are not exact
  var delta1_y = a2.y - a1.y;
  var delta1_x = a1.x - a2.x;
  var constant1 = (delta1_y * a1.x) + (delta1_x * a1.y);
  var delta2_y = b2.y - b1.y;
  var delta2_x = b1.x - b2.x;
  var constant2 = (delta2_y * b1.x) + (delta2_x * b1.y);
  var determinant = (delta1_y * delta2_x) - (delta1_x * delta2_y);
  var intersect_x;
  var intersect_y;
  var max_x_a = Math.max(a1.x, a2.x);
  var min_x_a = Math.min(a1.x, a2.x);
  var max_y_a = Math.max(a1.y, a2.y);
  var min_y_a = Math.min(a1.y, a2.y);
  var max_x_b = Math.max(b1.x, b2.x);
  var min_x_b = Math.min(b1.x, b2.x);
  var max_y_b = Math.max(b1.y, b2.y);
  var min_y_b = Math.min(b1.y, b2.y);

  if (Math.abs(determinant) < this.tolerance) {
    // Lines are parallel. Do they lie on the same line?
    var sameLine = false;
    if (delta1_x !== 0 && delta2_x !== 0) {
      var ya_atZero = constant1 / delta1_x;
      var yb_atZero = constant2 / delta2_x;
      if (Math.abs(ya_atZero - yb_atZero) < this.tolerance) {
        sameLine = true;
      }
    } else {
      var xa_atZero = constant1 / delta1_y;
      var xb_atZero = constant2 / delta2_y;
      if (Math.abs(xa_atZero - xb_atZero) < this.tolerance) {
        sameLine = true;
      }
    }

    if (sameLine) {
      // Segments lie on the same line. Do they overlap on the x and y axes?
      if ((Math.abs(max_x_b - min_x_a) < this.tolerance || max_x_b > min_x_a) &&
          (Math.abs(max_y_b - min_y_a) < this.tolerance || max_y_b > min_y_a)) {
        return true;
      }
    }
    return false;
  } else {
    intersect_x = (delta2_x * constant1 - delta1_x * constant2) / determinant;
    intersect_y = (delta1_y * constant2 - delta2_y * constant1) / determinant;

    // Check if the point lies on both line segments, allowing for tolerance
    if ((Math.abs(intersect_x - min_x_a) < this.tolerance || intersect_x > min_x_a) &&
        (Math.abs(intersect_x - max_x_a) < this.tolerance || intersect_x < max_x_a) &&
        (Math.abs(intersect_y - min_y_a) < this.tolerance || intersect_y > min_y_a) &&
        (Math.abs(intersect_y - max_y_a) < this.tolerance || intersect_y < max_y_a) &&
        (Math.abs(intersect_x - min_x_b) < this.tolerance || intersect_x > min_x_b) &&
        (Math.abs(intersect_x - max_x_b) < this.tolerance || intersect_x < max_x_b) &&
        (Math.abs(intersect_y - min_y_b) < this.tolerance || intersect_y > min_y_b) &&
        (Math.abs(intersect_y - max_y_b) < this.tolerance || intersect_y < max_y_b)) {
      return true;
    }
    return false;
  }
},

Point in Polygon

The algorithm to determine if a point is inside of a polygon is based on the observation that if a point moves along a ray from the probe point to infinity, and if it crosses the boundary of a polygon (possibly several times), then it alternately goes from the outside to the inside, then from the inside to the outside, etc. (This observation may be mathematically proved using the Jordan curve theorem.) So we can conclude that the probe point is inside the polygon if the ray to infinity intersects an odd number of polygon edges (see image 3 for illustration), and outside the polygon if it intersects an even number of polygon edges (see image 4 for illustration).

Image 3: if a point is inside the polygon, the ray to infinity intersects an odd number of edges (3 in this case)

Image 4: if a point is outside the polygon, the ray to infinity intersects an even number of edges (4 in this case)

There are a couple of edge cases that have to be considered to make the algorithm robust. The first edge case is that the probe point lies on one of the polygon’s edges. This can be tested by applying the ‘point on line’ algorithm to each of the polygon’s edges.

The second edge case is that the ray to infinity intersects the polygon exactly in one or more of its vertices.

Image 5: edge case 2, ray intersecting two vertices

In order to handle this edge case, two scenarios have to be differentiated. The first scenario involves the edges of the intersected vertex pointing in the same direction (as seen from the ray to infinity). In this case, the two vertices do not need to be counted – if the ray to infinity was outside the polygon before hitting the vertex, it will be outside afterwards. If it was inside the polygon before hitting the vertex, it will still be inside after hitting the vertex. In the second scenario, the edges of the intersected vertex point in different directions and the ray will move from the inside to the outside, or from the outside to the inside of the polygon. This is seen in image 5 in the second intersection. In this case, we have to add 1 to the intersection count.

The last edge case is that the ray to infinity overlaps one or more edges of the polygon:

Image 6: Ray overlapping with several of the polygon’s edges

The overlapping edges should not be considered at all. Instead, one can move the point on the ray forward to the next edge which is not parallel to the ray and apply the logic for edge case two explained above. For example in Image 6, the first intersection should not be counted, because the edge before the overlap points up and the edge after the overlap points up as well. In the second intersection however, the edges point up and then down, so the intersection count needs to be incremented.

If these edge cases are taken into consideration, the point-in-polygon algorithm can be easily implemented based on the line-line-intersection algorithm. In our implementation, the code looks like this:  

/**
 * Checks if a point is on a line segment
 * @param point object with x and y attributes
 * @param a1 start point of the line
 * @param a2 end point of the line
 * @returns true if the point is on the line, false otherwise
 */
pointOnLine: function (point, a1, a2) {
  var max_x_a = Math.max(a1.x, a2.x);
  var min_x_a = Math.min(a1.x, a2.x);
  var max_y_a = Math.max(a1.y, a2.y);
  var min_y_a = Math.min(a1.y, a2.y);

  // The point must lie within the bounding box of the segment
  if (point.x < min_x_a ||
      point.x > max_x_a ||
      point.y < min_y_a ||
      point.y > max_y_a) {
    return false;
  }

  // The point must also satisfy the line equation, allowing for tolerance
  var delta_y = a2.y - a1.y;
  var delta_x = a1.x - a2.x;
  var constant1 = (delta_y * a1.x) + (delta_x * a1.y);
  var constant2 = (delta_y * point.x) + (delta_x * point.y);
  if (Math.abs(constant2 - constant1) < this.tolerance) {
    return true;
  }
  return false;
},

/**
 * Checks if a point is within a given polygon
 * The idea is that a point is within a polygon if and only if a horizontal line
 * from the point towards positive infinity intersects an odd number of polygon edges
 * We need to be careful if we hit a vertex. Then the following needs to be done:
 * Suppose we draw a horizontal line going through the point and follow
 * that line to positive infinity. If we hit a vertex and both edges of the
 * vertex lie on the same side of the line, the vertex counts as two
 * intersections (the parity must not change).
 * If one edge lies above and one lies below the line, the vertex counts
 * as a single crossing.
 * The last complication is if one of the edges of the vertex is also horizontal.
 * In that case we need to move on to the nearest edge that is not horizontal.
 * @param point object with x and y coordinate
 * @param polygon array of points defining the polygon
 * @returns true if point is in polygon, false otherwise
 */
pointInPolygon: function(point, polygon) {
  // -Infinity/Infinity rather than Number.MIN_VALUE/MAX_VALUE, so that
  // negative coordinates are handled correctly
  var maxX = -Infinity;
  var maxY = -Infinity;
  var minX = Infinity;
  var minY = Infinity;
  var numberOfPoints = polygon.length;
  var i;
  var index;
  var currentPoint;
  var pointAfter;
  var pointBefore;
  var counter = 0;
  var countDown;

  // Sanity check that the point's x and y coordinates are
  // within the range of the polygon and that the point
  // is not a vertex and does not lie on an edge
  for (i = 0; i < numberOfPoints; i++) {
    currentPoint = polygon[i];
    if (currentPoint.x == point.x && currentPoint.y == point.y) {
      return true;
    }
    pointAfter = polygon[(i + 1) % numberOfPoints];
    if (this.pointOnLine(point, currentPoint, pointAfter)) {
      return true;
    }
    maxX = Math.max(maxX, currentPoint.x);
    maxY = Math.max(maxY, currentPoint.y);
    minX = Math.min(minX, currentPoint.x);
    minY = Math.min(minY, currentPoint.y);
  }
  if (point.x < minX || point.x > maxX || point.y < minY || point.y > maxY) {
    return false;
  }

  // Now the actual algorithm: cast a ray from the point to just beyond the
  // right edge of the bounding box and count the intersected edges
  for (i = 0; i < numberOfPoints; i++) {
    currentPoint = polygon[i];
    pointAfter = polygon[(i + 1) % numberOfPoints];
    if (this.intersectLineLine(point, {x: maxX + 1, y: point.y},
        currentPoint, pointAfter)) {

      // Let's check if the current vertex is intersected
      if (Math.abs(point.y - currentPoint.y) < this.tolerance) {

        // If this edge is horizontal, skip it; the vertex at the far end
        // of the horizontal run is handled in a later iteration (edge case 3)
        if (Math.abs(point.y - pointAfter.y) < this.tolerance) {
          continue;
        }

        // Do the current edge and the last non-horizontal edge
        // point in the same direction?
        countDown = 1;
        index = (i - countDown + numberOfPoints) % numberOfPoints;
        pointBefore = polygon[index];
        while (Math.abs(pointBefore.y - currentPoint.y) < this.tolerance) {
          countDown++;
          index = (i - countDown + numberOfPoints) % numberOfPoints;
          pointBefore = polygon[index];
        }
        if ((pointBefore.y > currentPoint.y && pointAfter.y > currentPoint.y) ||
            (pointBefore.y < currentPoint.y && pointAfter.y < currentPoint.y)) {
          // Both edges lie on the same side of the ray: the boundary only
          // touches the ray here, so the parity must not change
          counter = counter + 2;
        } else {
          // The edges lie on different sides: the ray crosses the boundary
          counter++;
        }
        continue;
      }

      // If the vertex after is intersected, it will be dealt with in
      // the next iteration
      if (point.y == pointAfter.y) {
        continue;
      }
      counter++;
    }
  }
  return (counter % 2 == 1);
}
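As a cross-check, here is a minimal standalone variant of the same even-odd rule. It uses the common half-open-interval trick, which treats each edge as half-open in y so that a vertex is counted for exactly one of its two edges, sidestepping the explicit vertex handling. The names are illustrative, not part of our library, and unlike the full implementation it does not treat boundary points specially:

```javascript
// Minimal even-odd point-in-polygon check (half-open interval trick).
// An edge is counted when it straddles the horizontal line through the
// point and the crossing lies to the right of the point.
function pointInPolygonSimple(point, polygon) {
  var inside = false;
  for (var i = 0, j = polygon.length - 1; i < polygon.length; j = i++) {
    var pi = polygon[i], pj = polygon[j];
    var crosses = (pi.y > point.y) !== (pj.y > point.y) &&
      point.x < (pj.x - pi.x) * (point.y - pi.y) / (pj.y - pi.y) + pi.x;
    if (crosses) {
      inside = !inside;
    }
  }
  return inside;
}

// A 4x4 axis-aligned square as the test polygon
var square = [{x: 0, y: 0}, {x: 4, y: 0}, {x: 4, y: 4}, {x: 0, y: 4}];
```

Running both implementations over a grid of probe points is a quick way to surface disagreements around vertices and edges.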

If you find any errors in our implementation, please point them out to us. Otherwise, we hope that the description of our implementation was interesting and possibly useful for you.

Vertical Numberline Plot

We have extended the functionality of our Numberline Plot Question Type by adding the ability to plot on a vertical number line.

Change between the default horizontal and new vertical number line by selecting the appropriate layout in the Formatting section of the Author Site.

Building Educational Experiences with Learnosity

Guest Post: Brad Hunt, VP Business Development, Smooth Fusion


One of our long-term clients, CEV Multimedia, Ltd., has been providing innovative educational materials for the last 30 years. CEV specializes in providing quality curriculum and educational resources for the subject areas of Agricultural Science & Technology, Business & Marketing, Family & Consumer Sciences, Trade & Industry, and Career Orientation. CEV’s teaching materials were originally distributed on VHS tapes, followed by CD and DVD, and now are delivered through the web.

Over the last few years, we have worked with the team at CEV to create an online learning platform known as iCEV. This online learning platform makes it easy for teachers and students to find and view educational videos, download worksheets and assignments, and find other quality content in several subject areas.

In the summer of 2014, CEV wanted to add interactive assignments, quizzes and tests to the existing platform. Instead of trying to reinvent these features and functionality from scratch, CEV partnered with Learnosity. Learnosity provides a set of tools and services that power the next generation of assessment products. Offered through a software-as-a-service model, Learnosity provides modular and flexible learning tools that can be integrated into any existing site.

With a powerful authoring system, Learnosity allows educators to create 52 different question types including multiple choice, sort lists, match lists, draw-on-an-image questions, cloze questions, number lines, and math essay. For CEV’s content areas, we used mostly multiple choice and order lists, but CEV continues to create new innovative questions as Learnosity adds question types.


This figure shows the Learnosity authoring platform that is used to create the questions.

Once the questions have been authored, there are several ways in which they can be grouped together to form activities and assessments. These activities can then be embedded on your own site. For CEV, we used Learnosity’s powerful API to embed activities into the iCEV product directly so that users don’t ever leave the iCEV platform. 

The figure above shows an interactive question in action on the iCEV platform. Students drag and drop the vocabulary term to match the definition.

Another valuable Learnosity feature is the reporting system. Through the API, you can embed reports right on your own site. For example, when students complete an activity on the iCEV platform, they are presented with a summary of their work.


The flexibility of the Learnosity platform was one of the reasons that it was selected by CEV. A good example of this is Learnosity’s user management features. Through the API, we were able to programmatically create users as they begin to take the assessments, and then Learnosity tracks all the student progress on various attempts at activities. This meant that we could keep the user management system we already had in place for iCEV’s users. They did not need a second login for Learnosity tools. This also allows us to track user progress to provide grade reports for students and teachers.


Lastly, as we progressed through the Learnosity integration, the Learnosity team was there to help. On a few occasions, we had questions about the API or how to accomplish something specific with it, and the staff at Learnosity were always willing to support our development via email or phone calls. They were great to work with.

If you are working on projects that require the use of some type of assessment engine, consider Learnosity. It is a robust, flexible platform that has worked well for iCEV and was easy for Smooth Fusion to integrate.

This post originally featured on the Smooth Fusion Blog.



Expanding trust in educational technology by pledging to safeguard student personal information.

Dec 9th (New York, NY) Learnosity, a transformative education technology company supplying Software as a Service (SaaS) assessment to many of the world’s leading school service providers, today announced that it has joined the Student Privacy Pledge created by the Future of Privacy Forum (FPF) and the Software & Information Industry Association (SIIA).

“We take security and student data protection extremely seriously at Learnosity and use advanced security features such as encrypted volumes and rotating authentication keys to help keep student data secure,” says Learnosity CTO Mark Lynch. “We are 100% committed to any endeavours that safeguard student privacy and are delighted to publicly affirm our commitment to responsible data practices by signing the Student Privacy Pledge.”

The Pledge details ongoing industry practices that meet and go beyond all federal requirements, and encourages service providers to more clearly articulate these practices to further ensure confidence in how they handle student data.

By signing the Pledge, Learnosity joins major ed tech companies including Amplify, Atomic Learning, Clever, Code.org, DreamBox Learning, Edmodo, Follett, Gaggle, Houghton Mifflin Harcourt, Knewton, Knovation, Lifetouch, Microsoft, Renaissance Learning, Think Through Math, and Triumph Learning, and publicly confirms that the company will:

  • Not sell student information
  • Not behaviorally target advertising
  • Use data for authorized education purposes only
  • Not change privacy policies without notice and choice
  • Enforce strict limits on data retention
  • Support parental access to, and correction of errors in, their children’s information
  • Provide comprehensive security standards
  • Be transparent about collection and use of data

The Pledge and more information about how to support it are available at http://studentprivacypledge.org/.

About Learnosity

Learnosity is a rapidly expanding educational technology company. The company offers a set of tools and services that allow clients to incorporate powerful interactive assessment into their digital products quickly and easily. Run by a talented group of people who are passionate about transforming learning and assessment, Learnosity is committed to designing market-leading software to allow developers to create exceptional assessment products, make teachers’ lives easier, and above all, instill a love of learning in students. The Company is seeing an annual doubling of revenues, and works with some of the leading names in the Education industry.  The Learnosity Toolkit was recently named the Best K-12 Enterprise Solution by the SIIA, and National Champion for Innovation in the European Business Awards.  Learnosity has offices in NYC, Dublin and Sydney. For more information contact Learnosity on +353 (0) 1 4498789, info@learnosity.com or visit www.learnosity.com.

Bringing the Learnosity Audio Question To Devices

The Audio Question has been a key vertebra in Learnosity’s backbone for quite some time. Built with a clever mixture of Flash and JavaScript, it has more than carried its own weight within the repertoire of the Learnosity Questions API.

As is the case with anything built with Flash, though, its lack of open standard has implications for its adoption on newer and more mobile platforms, most of which have seen a demand and subsequent push for open web standards.

Rather than forever relying on DIY specialisation through plugins and special configurations, this push has seen the adoption of highly practical audiovisual APIs for the mobile web: namely, the slew of WebRTC APIs and the Web Audio API.

The question: With these emerging technologies for mobile, can we bring the Learnosity Audio Question to devices?

A Short History of Exploration

At Learnosity, we like to keep up with emerging technology and adapt accordingly. As such, investigations into making the audio question more portable started as early as 2013, when the WebRTC and Web Audio APIs became available for Chrome for Android. One of our hack day teams tinkered with the technologies as they emerged, and, while noticeable teething problems put the proverbial pin in things, the positive undertone was that there was definitely potential.

It wasn’t until early 2014 that stable and user friendly support for the Web Audio API came to Chrome for Android. MediaStream API support for mobile WebRTC had hit the ground running not long after our initial experiments, but now the latest inclusion of the AudioContext from the Web Audio API was the next runner in the relay.

What this meant for a more portable Audio Question was:

  • The browser itself had access to the audio stream coming from an end user’s recording hardware thanks to the MediaStream API.
  • We could read in that audio stream to an accessible audio context.
  • Most importantly, we could access buffered chunks of that stream for the sake of persistence.

Recording – A Workflow

Working draft specifications and naming polyfills aside, the recording workflow itself is rather straightforward. The Web Audio API comes equipped with more tools than required to just get the job done. That being said, the bulk of the work came about while dealing with the ‘newness’ of having these tools available for mobile browsers – buffer sizes and memory management being of prime concern in a context of being as lightweight as possible.

The flow itself works as follows:

Audio Question workflow with Web Audio API

From the MediaStream API, we have access to a communications stream – the MediaStreamAudioSourceNode (depending on the end user’s setup, this is typically a microphone).
This is an AudioNode that acts as an audio source for the Web Audio API to work on.

We connect our audio source to an AnalyserNode. This allows us to have access to frequency and time domain analysis in real-time for the sake of levels monitoring.

This gets passed to a JavaScript processing node, which is the crux of accessing the audio itself for persistence. This pipes the audio buffer out of the AudioContext thread and into the main JavaScript thread so we can (as the name suggests) process it. At this point, we adjust the audio sample rate and encoding for transport and persistence – similarly, we need to have a copy stored in memory ready for playback.
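The resampling and encoding step can be sketched in plain JavaScript. This is purely illustrative – the function names are made up rather than Learnosity’s actual implementation, and a production version would average neighbouring samples instead of dropping them:

```javascript
// Reduce the sample rate of a Float32 buffer chunk by keeping every
// Nth sample (nearest-neighbour decimation).
function downsample(buffer, sourceRate, targetRate) {
    var ratio = sourceRate / targetRate;
    var length = Math.floor(buffer.length / ratio);
    var result = new Float32Array(length);
    for (var i = 0; i < length; i++) {
        result[i] = buffer[Math.floor(i * ratio)];
    }
    return result;
}

// Encode [-1, 1] float samples as 16-bit signed PCM for transport.
function encodePCM16(samples) {
    var out = new Int16Array(samples.length);
    for (var i = 0; i < samples.length; i++) {
        var s = Math.max(-1, Math.min(1, samples[i])); // clamp to [-1, 1]
        out[i] = s < 0 ? s * 0x8000 : s * 0x7FFF;
    }
    return out;
}
```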

Finally, the whole process chain is connected to the AudioDestinationNode, which is effectively the end user’s speakers.
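Put together, the chain described above might be wired up as follows. This is a hedged sketch with illustrative names, not the shipped implementation – the AudioContext and MediaStream are passed in as parameters, and the buffer size of 4096 is just a common default:

```javascript
// Wire up: source -> analyser -> script processor -> destination.
// onAudioProcess receives each buffered chunk for persistence.
function buildRecordingGraph(audioContext, stream, onAudioProcess) {
    var source = audioContext.createMediaStreamSource(stream);
    var analyser = audioContext.createAnalyser();            // levels monitoring
    var processor = audioContext.createScriptProcessor(4096, 1, 1);
    processor.onaudioprocess = onAudioProcess;               // chunks arrive here

    source.connect(analyser);
    analyser.connect(processor);
    processor.connect(audioContext.destination);

    return { source: source, analyser: analyser, processor: processor };
}
```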

(A pre-recording tone is supplied by an OscillatorNode, which outputs a computer generated sine wave, and we control the output drop-off with a GainNode – to prevent speakers from giving a hardware crackle due to a lone burst of sound).
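To illustrate that gain drop-off, here is the kind of signal the oscillator/gain pairing produces: a sine tone whose amplitude ramps linearly to zero at the end, so the speaker isn’t left with an abrupt click. (Computed sample-by-sample here for illustration; in the browser the OscillatorNode and GainNode do this for you.)

```javascript
// Generate lengthSamples of a sine tone at the given frequency, with a
// linear fade over the final fadeSamples to avoid an end-of-tone click.
function sineToneWithFade(freq, sampleRate, lengthSamples, fadeSamples) {
    var out = new Float32Array(lengthSamples);
    for (var i = 0; i < lengthSamples; i++) {
        var gain = 1;
        var remaining = lengthSamples - i;
        if (remaining <= fadeSamples) {
            gain = remaining / fadeSamples; // linear drop-off to zero
        }
        out[i] = gain * Math.sin(2 * Math.PI * freq * i / sampleRate);
    }
    return out;
}
```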

Playback – A Workflow

The playback workflow provided more than its fair share of “what?” moments while putting it together. It needs to be understood that the Web Audio API wasn’t intended to be an out of the box media player – there are other tools that fill that niche already, though those tools didn’t anticipate playing back raw audio fresh off the stream.
The Web Audio API was designed around the idea of video game audio engines and audio production applications, and as such, a lot of the tooling revolves around “one shot” playback – you don’t scrub or seek on an audio blip that lasts less than a second. Similarly, its use alongside the WebRTC specification sees it hooked up to a live stream and playing that until it stops – not altogether different.

Playback via the Web Audio API

In our playback workflow, our AudioBufferSourceNode is created from the stream we’ve been capturing via our recording workflow. In essence, this is raw audio data that has turned up to the party wearing a “Hi, my name is WAV” name tag, and manages to mingle as such.

Through our familiar chain of a GainNode (for volume) and an AnalyserNode (for levels), we again reach the AudioDestinationNode (hopefully speakers).

However, due to the one-shot nature of the AudioBufferSourceNode, any pause or seek operation on the audio will see its destruction, with a new node taking its place as if nothing has happened. Hilariously, the original has no idea at what point it stopped; it just knows that it did, and as such, playback timing needs to be an external operation.
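Keeping playback timing external can be as simple as a small clock object that accumulates the elapsed offset across pauses. This is an illustrative sketch, not the shipped code – the injected clock function stands in for audioContext.currentTime:

```javascript
// Track playback position outside the AudioBufferSourceNode, since the
// node itself cannot report where it stopped.
function PlaybackClock(clock) {
    this.clock = clock;     // function returning current time in seconds
    this.offset = 0;        // seconds already played across previous runs
    this.startedAt = null;  // clock time when playback last started
}

PlaybackClock.prototype.start = function () {
    this.startedAt = this.clock();
};

PlaybackClock.prototype.pause = function () {
    this.offset += this.clock() - this.startedAt;
    this.startedAt = null;
    // The offset is where the replacement AudioBufferSourceNode resumes.
    return this.offset;
};
```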

Conclusion – a solution?

The current incarnation of our efforts is the WebRTC-audio question. It is currently in beta, and functions admirably on the latest versions of Chrome and Firefox for Android.


As the MediaStream API specification is still a working draft, and the Web Audio API specification is still subject to change (for the better, no doubt), this beta flag is unlikely to be lifted in the near future.

The Future

Readers who themselves have experimented in this area will know all too well the pain of having to pipe the audio stream buffer into the main JavaScript thread. Thread jumping from the relatively safe audio stream thread has the potential to introduce latency and all manner of audio glitches.

Thankfully, the ever-advancing API specifications have seen to it that we’ll be getting Audio Workers at some point along our journey – let’s hope it’s not too far off.


Formula Input Feature

You can now embed our Formula Keyboard into multiple areas within a page even if you are not using the Formula Question Type.

Formula Input now sits alongside our range of Feature Widgets in the Features section of the Author Site.

Users can choose the symbol groups to be displayed and can control other features such as hints, whether an initial value is displayed, the keyboard UI style, and the response container size.

Check out the Author Guide Docs for more information on this new feature.


Creating Percentage Bar question type

At Learnosity, we have some pretty creative clients. The need for them to be able to create their own question types is increasing. We want to give third party developers the ability to create their own custom questions while utilising the power of the Learnosity Platform for Authoring, Assessment and Reporting. This is exactly what the recently released Custom question is for. With the Custom question, developers have full control over the look and feel of the question, the user interaction, and the scoring of the responses.

The Custom question is designed to be very simple to extend. To test drive it, I was asked to follow our knowledge base article and create a percentage bar question, where the student can drag the slider on the bar to change the input value.


View Demo

A Custom question is defined by passing a JSON object to the Questions API, just like any other question type. I figured I could use HTML5’s range input to do this, and style its Shadow DOM using CSS to get the user interface I wanted. In the JSON object, I passed in my custom attributes:

  • js – URL to the question’s JavaScript file
  • css – URL to the question’s CSS file
  • prepend_unit – unit that gets prepended to the value
  • append_unit – unit that gets appended to the value
  • step – the value between incremental points along the slider range
  • min_value – minimum value
  • max_value – maximum value
  • min_percentage – minimum percentage
  • max_percentage – maximum percentage
  • init_value – value of the question when it’s first loaded
  • bar_color – color of the bar
  • valid_response – valid response
  • score – the score students get if the response is valid
{
    "response_id": "custom-range-slider-response-1",
    "type": "custom",
    "js": "//docs.learnosity.com/demos/products/questionsapi/questiontypes/custom_percentagebar.js",
    "css": "//docs.learnosity.com/demos/products/questionsapi/questiontypes/custom_percentagebar.css",
    "prepend_unit": "$",
    "append_unit": "",
    "min_value": "0",
    "max_value": "150",
    "step": "10",
    "min_percentage": 0,
    "max_percentage": 100,
    "init_value": "20",
    "bar_color": "#80D3B4",
    "valid_response": "120",
    "score": 1,
    "stimulus": "If Luke has $150 and he spends $30 on beer, how much money has he got left?"
}

I passed in the stimulus attribute as well, which is available in all question types.
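As an aside, the relationship between the value and percentage attributes above amounts to simple linear interpolation. The helper below is purely illustrative – it is not part of the shipped question, and it assumes the numeric attributes have already been parsed from their string form:

```javascript
// Map a slider value from the [min_value, max_value] range into the
// configured [min_percentage, max_percentage] range.
function valueToPercentage(value, q) {
    var fraction = (value - q.min_value) / (q.max_value - q.min_value);
    return q.min_percentage + fraction * (q.max_percentage - q.min_percentage);
}
```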

After that, I created a JavaScript module with the init function of the question, where I specified the markup of the question to be rendered. Once this is completed, I used the ready event to notify Questions API that my question was ready:

var PercentageBar = function (options) {
    var template = '...'; // my custom question HTML

    this.$el = options.$el;
    this.$el.html(template);

    // notify Questions API that the question has rendered
    options.events.trigger('ready');
};
In order for responses to be saved, I needed to let Questions API know when the response to the question had changed. To do this, I created a listener for the input event. For it to work in IE10, I needed a listener for the change event as well. When the events were triggered, I could call the changed event and pass in the current input value:

$bar.on('input change', function () {
    options.events.trigger('changed', $response.val());
});
Next I had to implement the scoring function. I simply needed to define a scorer function and pass in the question object and response data:

function PercentageBarScorer(question, response) {
    this.question = question;
    this.response = response;
}
Then following the structure of the scoring interface as specified in the knowledge base article, I defined methods to determine if the response was correct, what was the score for the current response and the maximum score of the question.

function PercentageBarScorer(question, response) {
    this.question = question;
    this.response = response;
}

PercentageBarScorer.prototype.isValid = function () {
    return this.response === this.question.valid_response;
};

PercentageBarScorer.prototype.score = function () {
    return this.isValid() ? this.maxScore() : 0;
};

PercentageBarScorer.prototype.maxScore = function () {
    return this.question.score || 1;
};
When the validation button was clicked, it triggered Questions API’s public method validateQuestions. In my JavaScript module, I listened for the validate event which would trigger the validate function I created:

function validate() {
    var scorer = new PercentageBarScorer(options.question, $response.val());

    if (scorer.isValid()) {
        // e.g. apply the correct-answer styling to the bar
    } else {
        // e.g. apply the incorrect-answer styling
    }
}

options.events.on('validate', validate);

The last step in creating the JavaScript module was to return an object containing the Question and Scorer properties:

return {
    Question: PercentageBar,
    Scorer:   PercentageBarScorer
};
Embedding the question on the page was just like embedding any of our existing core question types. I followed the guide in the Questions API documentation and voilà! We have a new percentage bar custom question! Since I was utilising the HTML5 range input and Shadow DOM, this will only work in the latest versions of Chrome, Firefox and Safari, and in IE10+.

All this from start to finish took less than 2 days to implement and we think it will open up a lot of doors for developers out there to do some pretty cool things with our APIs.

View Demo