For better understanding, it is useful to clarify some of the terms used below, as some of them may have been unclear or confusing in past use. The definitions and examples are not strict “dictionary” or “textbook” versions; they are somewhat simplified for ease of use:
Distance learning
Definition
Instruction carried out remotely, usually online, without physical face-to-face contact.
Example
Creating a course in Moodle and regularly checking attendance and the completion of tasks.
An online lecture in real time.
Assigning and submitting assignments via Moodle.
Synchronous online instruction
Definition
Distance learning, where participants are connected at the same time; they are “synchronized” and communicate directly with each other. It is possible to search for and use digital equivalents of common pedagogical procedures, and at least in some fields, to come close to in-person teaching.
Example
Lectures, seminars, exercises, etc., which take place at the originally scheduled time. Participants can hear, and ideally, even see what’s going on. There can be a lively discussion with immediate feedback.
Asynchronous online instruction
Definition
Distance learning that participants follow or respond to at a time of their own choosing; not everyone is connected at the same time according to a schedule.
Example
Instructors record a commentary to the lecture which they post on a streaming medium and provide a link on Moodle. Students participate according to their time schedules and go through the materials by themselves.
Syllabus
Definition
A summary of what is to take place in a course or subject from the instructor’s point of view.
Example
The structure and function of the gums and teeth. Parts of the tooth. Tooth tissues (enamel, dentin, cementum, pulp) and their interfaces.
Learning objectives
Definition
What we want students to learn in our course.
Example
Students should be able to describe the structure of hard tissues and soft tissues of the tooth, including the differences between individual teeth. They should be able to recognize tissues under a microscope and discuss their function. They can anticipate the consequences of disorders of the normal tooth structures.
Learning outcomes
Definition
What do students who pass our course know? What tasks and problems can they solve? How can they practically use what they have learned?
This is a list of verifiable, achievable, measurable, unambiguous, public, and comprehensible assignments whose completion satisfies us. We use them for verifying, testing, and evaluating, regardless of who the examiner is.
They are the basis of accreditation and of the international comparability of the course, and a measure of its prestige and complexity. Guarantors of related courses can map out the competencies of graduates from the learning outcomes. For students, they are a guide to responsible and practical exam preparation and make it easier to see what is really important in large quantities of material. The good news: experienced instructors already have learning outcomes “in their heads”, often refined through testing; it is sufficient to put them in writing, if this has not already been done.
Example
- Define and use the following terms in an appropriate context: gingiva, marginal gingiva, dentogingival junction, hemidesmosome, gingival sulcus, alveolar gingiva, gingival (interdental) papilla; tooth, crown, neck, root, medullary cavity, root canal, apical opening of the tooth, enamel, enamel prisms, ameloblast, hydroxyapatite crystals, Hunter-Schreger bands, incremental (Retzius) bands of enamel, connection between enamel and dentin, dentin, dentinal canals, odontoblast protrusions (Tomes fibres), peritubular dentin, intertubular dentin, dentin incremental lines, predentin, primary dentin, secondary dentin, tertiary dentin, acellular and cellular cementum, cementoblast, cementocyte, pulp, odontoblast, dentition, periodontal ligaments, dentoalveolar junction, alveolar periosteum;
- Draw and describe the structure of the tooth according to the actual specimen under the microscope.
- Compare the structure and occurrence of predentin, primary dentin, secondary dentin, and tertiary dentin.
- Discuss the effect of dentogingival junction and the depth of gingival sulcus on the periodontal function and condition.
- Estimate the effect of reduced salivation on the condition of the oral mucosa and teeth.
- Estimate the effects of a vitamin C deficiency on periodontal ligaments.
Bloom’s taxonomy
Definition
A multi-layered hierarchy of learning objectives and skills. It answers the question of how to organize learning outcomes and how to balance the different types of questions used in testing.
The layers of the taxonomy form a pyramid (higher numbers mean a higher and more difficult level):
1. Memorizing knowledge
2. Understanding
3. Use, application, problem solving
4. Analysis of problems
5. Evaluation and synthesis
6. Creation of new values, decision-making
This approach makes it easier to find a balanced mix of different types of learning outcomes. From these, the structure of lessons and the content of testing can then be derived: define the outcome → choose the appropriate form and scope of teaching → define the scope and form of testing accordingly.
Example
The instructor will write in detail:
- What should students remember? What should they be able to recall from long-term memory?
- What should students understand and what is the best way to demonstrate this?
- What should students be able to use? How should they be able to use knowledge and skills? To solve which problems?
- How should students be able to analyse a situation or problem or concept? How should they be able to describe the relationships between the individual components of the problem?
- What should students be able to evaluate? What conclusions should be drawn and on what rules should the conclusions be based?
- What should students be able to create? Who should understand the output, and for whom should it be used?
Diagnostic evaluation/testing
Definition
It shows instructors what students already know about the curriculum at the beginning of the lesson or course. Instructors evaluate the preparedness of students, their strengths and weaknesses, knowledge and skills before the instruction. They can then adjust it according to the results of the diagnostic evaluation.
Alternatively, the same can be done at the end of a lesson to evaluate the progress made during it. Such an assessment does not affect grading.
Example
At the beginning of a lesson or course, students take a short quiz. The instructor adjusts the next part of the lesson according to the results.
At the end of the lesson or course, students take another short quiz, which shows whether and how they have improved during the lesson.
Formative evaluation/testing
Definition
Students take very short quizzes/tests throughout the course, ideally at the beginning or end of each lesson. Students and instructors receive quick feedback on whether the materials were understood or where there was a gap in understanding. It is better to identify and address such gaps immediately, especially in fields where “gaps” in education are not acceptable and where continuing without prior understanding would be ineffective. This remedies the common phenomenon where students incorrectly believe that they understand things and only discover at the exam, when it is too late, that this is not true. It also prevents situations where the educator believes that everything has been clearly explained and does not know that students did not understand.
This evaluation is not graded either. If it is graded, then only for training reasons and without affecting the grade for the course.
Example
At the beginning of a lesson, students answer three interactive questions via an online voting platform such as polleverywhere.com (or another, e.g. socrative.com or mentimeter.com), testing knowledge from the previous lesson. The instructor immediately sees that 35% of the students failed one of the three conceptual questions. The instructor therefore devotes 4 to 5 minutes to explaining the correct solution and the common incorrect answers, and then knows it is safe to follow up with new material.
Summative evaluation/testing
Definition
Tests and other methods of evaluation whose score is the basis for credit or an exam and for the evaluation of academic success. It usually takes place on scheduled dates after the end of a teaching unit.
Example
To obtain credit, students must score above 80% on three tests during the semester.
Or: a written test with a minimum success rate of 80% required to proceed to the oral part of the exam.
Test reliability
Definition
A measure of a test’s consistency and accuracy. Does the test give similar results even when assessed by different evaluators? Is it reasonably long, but not inefficiently long? Does it contain items with high discriminative power?
Example
The test has two variants. We verified that the same group of students achieves a similar score in both variants.
We have verified that the test results are independent of the evaluator.
The Cronbach’s alpha coefficient of our test is around the desired value of 0.8. This means that if we split the test into smaller parts, their results would correlate with a coefficient of approximately 0.8. If the value is >0.8, some questions can be deleted, because they no longer add information (we typically want 20 to 40 questions). If the value is <0.8, we are probably testing several areas at the same time; do we really want to do this?
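As a rough sketch of the calculation (not Moodle’s implementation), Cronbach’s alpha can be computed from a matrix of item scores: rows are students, columns are questions, with 1 for a correct answer and 0 for an incorrect one. The function name and the sample data below are illustrative, not taken from any real course.

```python
# Minimal sketch of Cronbach's alpha; rows = students, columns = items,
# 1 = correct, 0 = incorrect. Data and names are hypothetical.

def cronbach_alpha(scores):
    """Cronbach's alpha for a list of per-student item-score lists."""
    k = len(scores[0])                  # number of items

    def var(xs):                        # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Perfectly consistent items (every student answers all items the same way)
# give an alpha of 1.0; mixed, unrelated answers push alpha down.
consistent = [[1, 1, 1], [0, 0, 0], [1, 1, 1], [0, 0, 0]]
print(cronbach_alpha(consistent))
```

In practice one would use the statistics built into Moodle or a statistics package rather than hand-rolled code; the sketch only shows what the coefficient measures: how much the items vary together relative to the variance of the total score.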
Test validity/evaluation validity
Definition
What is the significance of a test score? Does it really correspond to what the test was supposed to measure?
Example
A valid test is compiled from precisely formulated learning outcomes, which are available to all instructors and students. There is no doubt about what belongs in the test and what does not, because it directly tests specific, achievable, and measurable learning outcomes.
Test item analysis
Definition
A statistical analysis of test results that helps to identify confusing, misleading, poorly formulated, or otherwise problematic items that reduce the reliability and validity of a test.
It is built directly into the Moodle environment.
Statistical analysis cannot, however, replace a professional review of the test questions by another instructor.
Example
We collected 500 attempts at a test drawn from a bank of 500 questions, with 40 questions randomly selected from the bank per test. This allows us to perform a statistical analysis of the test directly in Moodle. We know the facility index and the discrimination index of every question in our tests, and we revise questions so that both indexes reach acceptable, intended values.
Facility index (FI) for test questions
Definition
The percentage of students who answer a question correctly.
Example
Question 15 is answered correctly by 80% of the students.
Question 19 is answered correctly by 100% of the students. We check whether it is too easy for the purposes of the course, and if so, we replace it with a more difficult one. However, we can also keep the question, for example if it tests something essential that every participant absolutely must know, and the result shows that they have learned it correctly.
Question 23 is answered correctly by only 6% of the students. Why is this? Was it our intention when making the test? Is there a problem with the question or in our teaching? Do we have corresponding assignments in the learning outcomes?
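The calculation behind the facility index is simple arithmetic; a minimal sketch (with an illustrative function name and hypothetical data) might look like this:

```python
# Minimal sketch: the facility index of a question is the percentage of
# students who answered it correctly. Data are hypothetical.

def facility_index(correct_flags):
    """Percentage of correct answers; correct_flags is a list of 0/1."""
    return 100 * sum(correct_flags) / len(correct_flags)

# 4 of 5 students answered correctly -> FI = 80%.
print(facility_index([1, 1, 1, 1, 0]))  # 80.0
```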
Discrimination index (DI) for test questions
Definition
The correlation between the score of a question and the score of the test as a whole. It indicates how well the question distinguishes between overall successful (“good”) and unsuccessful (“bad”) students. Beware of low or even negative values!
Example
Question 6 has a DI of 85%, fulfils its function in the test, and is likely unproblematic.
Question 5 has a DI of only 5%. We check why it has such a low correlation with the results of the whole test, and we can decide to reformulate it.
Question 9 has a DI of -7%, i.e. the more successful a student is overall, the worse they answer it. The question almost certainly contains a problem that confuses especially well-prepared (well-read and imaginative) students.
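One common way to compute the DI is as the Pearson correlation between per-student scores on a single question and their total test scores; variants correlate the item with the rest of the test instead of the total. The sketch below uses illustrative names and hypothetical data:

```python
# Minimal sketch of a discrimination index as the Pearson correlation
# between item scores and total test scores. Data are hypothetical.

def discrimination_index(item_scores, total_scores):
    """Pearson correlation between one item's scores and total scores."""
    n = len(item_scores)
    mi = sum(item_scores) / n
    mt = sum(total_scores) / n
    cov = sum((x - mi) * (y - mt) for x, y in zip(item_scores, total_scores))
    si = sum((x - mi) ** 2 for x in item_scores) ** 0.5
    st = sum((y - mt) ** 2 for y in total_scores) ** 0.5
    return cov / (si * st)

# Strong students answer correctly, weak ones do not -> DI close to +1.
good = discrimination_index([1, 1, 0, 0], [38, 35, 12, 9])
# The reverse pattern yields a negative DI, the warning sign described above.
bad = discrimination_index([0, 0, 1, 1], [38, 35, 12, 9])
```

A question answered correctly mainly by otherwise weak students (the second pattern) is exactly the kind of item the analysis is meant to flag.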
Start/stop/continue feedback
Definition
Getting feedback from students by answering three questions:
1. What are you missing in class? What should be added or started in your opinion?
2. In your opinion, what do we spend too much time on and what should be stopped (what should be limited)?
3. What suits you, works fine, and should be continued?
Example
During the lessons (a third of the way or halfway through the semester), we ask students these questions using an open-answer form (Moodle, Microsoft Forms, Google Forms, polleverywhere.com…). We encourage them to think briefly about the teaching so far and write what they would like to add for the rest of the semester, what to stop, and what to continue.
Feedback takes about two minutes per question, i.e. approximately six minutes in total. We do not have to do everything students request (they do not, for example, have an overview of the entire subject), but we can get good ideas and find out what works for them and what does not.
Evidence-based teaching
Definition
Any teaching method whose effectiveness is supported by data from well-controlled, published studies or by the conclusions of the cognitive sciences. We thus prefer the procedures that make instructors’ work most effective and have the greatest, most favourable impact on students’ education.
Example
Examples of principles for which there is the strongest evidence:
- Spend class time primarily on active student work, not on passive lecturing.
- Cultivate frequent feedback.
- Have students jointly solve well-prepared problems and discuss them; this leads to high retention of the material.
- Concrete learning outcomes motivate students, prevent many problems, and prolong active learning.
- Base testing on clear and transparent learning outcomes.
Blind spot of experts
Definition
Experts may not perceive the difficulty of a step in the learning process, because, with their education and practice over the years, it has become taken for granted. From the students’ point of view, however, this can be a difficult obstacle to overcome.
Example
The instructor says, “This simple equation is self-explanatory, as is its application.” However, many students cannot master the logic and meaning of the equation without an adequate demonstration.
Protocol for the distribution of activities during teaching
Definition
Using a protocol completed by an independent observer, we measure and describe what the instructor and students actually spend their time on during a course and over the entire period of instruction.
The objective results may differ surprisingly from our intentions or notions. An example of such a protocol can be found here: https://www.cwsei.ubc.ca/resources/COPUS.htm
Example
What do students do:
- Listen
- Think (individually)
- Discuss, ask questions
- Work in groups
- Answer questions
- Present
- Are tested
- Wait
- Other
What do instructors do:
- Lecture
- Write, draft
- Provide feedback
- Answer questions
- Assist, advise
- Consult individually
- Demonstrate
- Wait, resolve technical problems
- Other