The MIT License (MIT)

Copyright (c) 2014 CNRS

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

AUTHORS
Hervé Bredin -- http://herve.niderb.fr

Evaluation metrics

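This notebook illustrates the evaluation metrics provided by pyannote.metrics on a toy example: a reference annotation describing who speaks when, and a hypothesis annotation to be evaluated against it.
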
In [1]:
from pyannote.core import Annotation, Segment
reference = Annotation()
reference[Segment(0, 1)] = 'SHELDON'
reference[Segment(1, 2)] = 'PENNY'
reference[Segment(3, 4)] = 'LEONARD'
reference[Segment(4, 6)] = 'SHELDON'
reference
Out[1]: (timeline visualization of the reference annotation)
In [2]:
hypothesis = Annotation()
hypothesis[Segment(0.2, 1.2)] = 'SHELDON'
hypothesis[Segment(1.2, 1.9)] = 'PENNY'
hypothesis[Segment(2.8, 4)] = 'LEONARD'
hypothesis[Segment(4, 5)] = 'SHELDON'
hypothesis[Segment(5.1, 6)] = 'RAJ'
hypothesis
Out[2]: (timeline visualization of the hypothesis annotation)

Evaluation of speech activity detection

Speech activity detection results are reported using three complementary evaluation metrics.

The detection error rate (DER) is the ratio of the duration incorrectly classified as speech (false alarm) or as non-speech (missed detection) to the total duration of speech in the reference:

$\displaystyle \text{DER} = \frac{\text{miss} + \text{fa}}{\text{total}}$

where:

  • $\text{total}$ is the total duration of speech according to the reference annotation,
  • $\text{miss}$ is the total duration of segments incorrectly classified as non-speech,
  • $\text{fa}$ is the total duration of segments incorrectly classified as speech.
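
As a quick sanity check, the formula can be evaluated by hand on the toy example (the durations below are read directly off the reference and hypothesis segments; plain arithmetic, not part of pyannote.metrics):

# Hand check of the DER formula on the toy example.
total = 1 + 1 + 1 + 2       # reference speech: (0,1), (1,2), (3,4), (4,6)
miss = 0.2 + 0.1 + 0.1      # (0,0.2), (1.9,2) and (5,5.1) are reference speech
                            # left uncovered by the hypothesis
fa = 0.2                    # (2.8,3) is hypothesis speech outside reference speech
print('DER = {0:.1f}%'.format(100 * (miss + fa) / total))  # DER = 12.0%
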
In [3]:
from pyannote.metrics.detection import DetectionErrorRate
detectionErrorRate = DetectionErrorRate()
d = detectionErrorRate(reference, hypothesis)
print('Detection error rate: {d:.1f}%'.format(d=100*d))
Detection error rate: 12.0%

Passing detailed=True returns a breakdown of each component:

In [4]:
details = detectionErrorRate(reference, hypothesis, detailed=True)
details
Out[4]:
{'detection error rate': 0.11999999999999997,
 'false alarm': 0.20000000000000018,
 'miss': 0.39999999999999974,
 'total': 5.0}

Precision is the fraction of the total duration detected as speech in the hypothesis that is indeed annotated as speech in the reference.
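
On the toy example, precision too can be checked by hand:

# Hand check of detection precision on the toy example.
hyp_speech = 1.0 + 0.7 + 1.2 + 1.0 + 0.9   # total hypothesis speech = 4.8
correct = hyp_speech - 0.2                 # all of it overlaps reference speech,
                                           # except the (2.8,3) false alarm
print('precision = {0:.1f}%'.format(100 * correct / hyp_speech))  # precision = 95.8%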

In [5]:
from pyannote.metrics.detection import DetectionPrecision
detectionPrecision = DetectionPrecision()
p = detectionPrecision(reference, hypothesis)
print('Detection precision: {p:.1f}%'.format(p=100*p))
Detection precision: 95.8%

Recall is the fraction of the total duration of speech according to the reference annotation that is indeed detected as speech in the hypothesis.
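
And the same for recall:

# Hand check of detection recall on the toy example.
ref_speech = 5.0              # total reference speech
detected = ref_speech - 0.4   # everything but the 0.4s of missed detection
print('recall = {0:.1f}%'.format(100 * detected / ref_speech))  # recall = 92.0%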

In [6]:
from pyannote.metrics.detection import DetectionRecall
detectionRecall = DetectionRecall()
r = detectionRecall(reference, hypothesis)
print('Detection recall: {r:.1f}%'.format(r=100*r))
Detection recall: 92.0%

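Precision and recall can be combined into a single f-measure, their harmonic mean $\displaystyle F = \frac{2 \times \text{precision} \times \text{recall}}{\text{precision} + \text{recall}}$, which is what the next cell computes:
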
In [7]:
from pyannote.metrics import f_measure
print('Detection f-measure: {f:.1f}%'.format(f=100*f_measure(p, r)))
Detection f-measure: 93.9%

Evaluation of speaker identification

Speaker identification results are reported using the identification error rate (IER), defined as follows:

$\displaystyle \text{IER} = \frac{\text{miss} + \text{fa} + \text{confusion}}{\text{total}}$

where

  • $\text{total}$ is the total duration of speech according to the reference annotation,
  • $\text{miss}$ is the total duration of segments incorrectly classified as non-speech,
  • $\text{fa}$ is the total duration of segments incorrectly classified as speech,
  • $\text{confusion}$ is the total duration of speech segments whose detected label is incorrect.

In other words, IER is a compound metric that accounts for both speech turn detection errors and speaker identification errors.
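
As before, the formula can be checked by hand on the toy example (plain arithmetic on the segment durations):

# Hand check of the IER formula on the toy example.
miss, fa, total = 0.4, 0.2, 5.0   # same values as for detection above
confusion = 0.2 + 0.9             # (1,1.2): PENNY hypothesized as SHELDON
                                  # (5.1,6): SHELDON hypothesized as RAJ
print('IER = {0:.1f}%'.format(100 * (miss + fa + confusion) / total))  # IER = 34.0%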

In [8]:
from pyannote.metrics.identification import IdentificationErrorRate
identificationErrorRate = IdentificationErrorRate()
i = identificationErrorRate(reference, hypothesis)
print('Identification error rate: {i:.1f}%'.format(i=100*i))
Identification error rate: 34.0%

Again, passing detailed=True returns a breakdown of each component:

In [9]:
identificationErrorRate(reference, hypothesis, detailed=True)
Out[9]:
{'confusion': 1.1000000000000003,
 'correct': 3.5,
 'false alarm': 0.20000000000000018,
 'identification error rate': 0.34,
 'missed detection': 0.39999999999999974,
 'total': 5.0}

Bonus

Standard precision and recall metrics are also available for identification.

In [10]:
from pyannote.metrics.identification import IdentificationPrecision
precision = IdentificationPrecision()
p = precision(reference, hypothesis)
print('Identification precision: {p:.1f}%'.format(p=100*p))
Identification precision: 72.9%

In [11]:
from pyannote.metrics.identification import IdentificationRecall
recall = IdentificationRecall()
r = recall(reference, hypothesis)
print('Identification recall: {r:.1f}%'.format(r=100*r))
Identification recall: 70.0%

An in-depth analysis of identification errors is also available.

In [12]:
from pyannote.metrics.errors.identification import IdentificationErrorAnalysis
identificationErrorAnalysis = IdentificationErrorAnalysis()
identificationErrorAnalysis.matrix(reference, hypothesis)
Out[12]:
          reference  hypothesis  correct  confusion  false alarm  missed detection  LEONARD  PENNY  RAJ  SHELDON
LEONARD           1         1.2      1.0        0.0          0.2               0.0        1    0.0  0.0      0.0
PENNY             1         0.7      0.7        0.2          0.0               0.1        0    0.7  0.0      0.2
SHELDON           3         2.0      1.8        0.9          0.0               0.3        0    0.0  0.9      1.8
  • column reference contains the speech duration of each speaker according to the reference,
  • column hypothesis contains the speech duration of each speaker according to the hypothesis,
  • column correct contains the duration of correct identification for each speaker,
  • columns confusion, false alarm and missed detection contain the corresponding error durations,
  • the last four columns provide the detailed confusion matrix.
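
For instance, the confusion-matrix part of the SHELDON row shows that, out of its 3 seconds of reference speech, 1.8 seconds were correctly labeled SHELDON while 0.9 seconds were hypothesized as RAJ.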

Finally, here is the list of correct classifications and errors, sorted by decreasing duration.

In [13]:
identificationErrorAnalysis.annotation(reference, hypothesis).chart()
Out[13]:
[(('correct', 'SHELDON', 'SHELDON'), 1.8),
 (('correct', 'LEONARD', 'LEONARD'), 1),
 (('confusion', 'SHELDON', 'RAJ'), 0.9000000000000004),
 (('correct', 'PENNY', 'PENNY'), 0.7),
 (('missed detection', 'SHELDON', None), 0.29999999999999966),
 (('false alarm', None, 'LEONARD'), 0.20000000000000018),
 (('confusion', 'PENNY', 'SHELDON'), 0.19999999999999996),
 (('missed detection', 'PENNY', None), 0.10000000000000009)]