In
some
contexts
,
well-formed
natural
language
cannot
be
expected
as
input
to
information
or
communication
systems
.
In
these
contexts
,
the
use
of
grammar-independent
input
(
sequences
of
uninflected
semantic
units
such as language-independent icons
)
can
be
an
answer
to
the
users
'
needs
.
However
,
this
requires
that
an
intelligent
system
be
able
to
interpret
this
input
with
reasonable
accuracy
and
in
reasonable
time
.
Here
we
propose
a
method
allowing
a
purely
semantic-based
analysis
of
sequences
of
semantic
units
.
It
uses
an
algorithm
inspired
by
the
idea
of
"
chart
parsing
"
known
in
Natural
Language
Processing
,
which
stores
intermediate
parsing
results
in
order
to
bring
the
calculation
time
down
.
Introduction
As
the
mass
of
international
communication
and
exchange
increases
,
icons
as
a
means
to
cross
the
language
barriers
have
gained ground
in
some
specific
contexts
of
use
,
where
language
independent
symbols
are
needed
(
e.g.
on
some
machine
command
buttons
)
.
The
renewed
interest
in
iconic
communication
has
given
rise
to
important
works
in
the
field
of
Design
(
Aicher
and
Krampen
,
1996
;
Dreyfuss
,
1984
;
Ota
,
1993
)
,
on
reference
books
on
the
history
and
development
of
the
matter
(
Frutiger
,
1991
;
Liungman
,
1995
;
Sassoon
and
Gaur
,
1997
)
,
as
well
as
newer
studies
in
the
fields
of
Human-Computer
Interaction
and
Digital
Media
(
Yazdani
and
Barker
,
2000
)
or
Semiotics
(
Vaillant
,
1999
)
.
We
are
here
particularly
interested
in
the
field
of
Information
Technology
.
Icons
are
now
used
in
nearly
all
possible
areas
of
human
computer
interaction
,
even in office
software
or
operating
systems
.
However
,
there
are
contexts
where
richer
information
has
to
be
managed
,
for
instance
:
Alternative
&
Augmentative
Communication
systems
designed
for
the
needs
of
speech
or
language
impaired
people
,
to
help
them
communicate
(
with
icon
languages
like
Minspeak
,
Bliss
,
Commun-I-Mage
)
;
Second
Language
Learning
systems
where
learners
have
a
desire
to
communicate
by
themselves
,
but
do
not
master
the
structures
of
the
target
language
yet
;
Cross-Language
Information
Retrieval
systems
,
with
a
visual
symbolic
input
.
In
these
contexts
,
the
use
of
icons
has
many
advantages
:
it
makes
no
assumption
about
the
language
competences
of
the
users
,
allowing
impaired
users
,
or
users
from
a
different
linguistic
background
(
which
may
not
include
a
good
command
of
one
of
the
major
languages
involved
in
research
on
natural
language
processing
)
,
to
access
the
systems
;
it
may
trigger
a
communication-motivated
,
implicit
learning
process
,
which
helps
the
users
to
gradually
improve
their
level
of
literacy
in
the
target
language
.
However
,
icons
suffer
from
a
lack
of expressive
power
to
convey
ideas
,
namely
,
the
expression
of
abstract
relations
between
concepts
still
requires
the
use
of
linguistic
communication
.
An
approach
to
tackle
this
limitation
is
to
try
to
"
analyse
"
sequences
of
icons
the way
natural
language
sentences
are
parsed
,
for
example
.
However
,
icons
do
not
give
grammatical
information
as
clues
to
automatic
parsers
.
Hence
,
we
have
defined
a
method
to
interpret
sequences
of
icons
by
implementing
the
use
of
"
natural
"
semantic
knowledge
.
This
method
makes it possible to build
knowledge
networks
from
icons
as
is
usually
done
from
text
.
The
analysis
method
that
will
be
presented
here
is
logically
equivalent
to
the
parsing
of
a
dependency
grammar
with
no
locality
constraints
.
Therefore
,
the
complexity
of
a
fully
recursive
parsing
method
grows
more
than
exponentially
with
the
length
of
the
input
.
This
makes
the
reaction
time
of
the
system
too
long
to
be
acceptable
in
normal
use
.
We
have
now
defined
a
new
parsing
algorithm
which
stores
intermediate
results
in
"
charts
"
,
in
the
way
chart
parsers
(
Earley
,
1970
)
do
for
natural
language
.
1
Description
of
the
problem
Assigning
a
meaning
to
a
sequence
of
information
items
implies
building
conceptual
relations
between
them
.
Human
linguistic
competence
consists
in
manipulating
these
dependency
relations
:
when
we
say
that
the
cat
drinks
the
milk
,
for
example
,
we
perceive
that
there
are
well-defined
conceptual
connections
between
'
cat
'
,
'
drink
'
,
and
'
milk
'
—
that
'
cat
'
and
'
milk
'
play
given
roles
in
a
given
process
.
Symbolic
formalisms
in
AI
(
Sowa
,
1984
)
reflect
this
approach
.
Linguistic
theories
have
also
been
developed
specifically
to
give
account
of
these
phenomena
(
Tesnière
,
1959
;
Kunze
,
1975
;
Mel'cuk
,
1988
)
,
and
to
describe
the
transition
between
semantics
and
various
levels
of
syntactic
description
:
from
deep
syntactic
structures
which
actually
reflect
the
semantic
contents
,
to
the
surface
structure
whereby
messages
are
put
into
natural
language
.
Human
natural
language
reflects
these
conceptual
relations
in
its
messages
through
a
series
of
linguistic
clues
.
These
clues
,
depending
on
the
particular
languages
,
can
consist
mainly
in
word
ordering
in
sentence
patterns
(
"
syntactical
"
clues
,
e.g.
in
English
,
Chinese
,
or
Creole
)
,
in
word
inflection
or
suffixation
(
"
morphological
"
clues
,
e.g.
in
Russian
,
Turkish
)
,
or
in
a
given
blend
of
both
(
e.g.
in
German
)
.
Parsers
are
systems
designed
to
analyse
natural
language
input
,
on
the
basis
of
such
clues
,
and
to
yield
a
representation
of
its
informational
contents
.
[Figures: syntactical analysis based on word order; morphological analysis based on word inflexion.]
In
contexts
where
icons
have
to
be
used
to
convey
complex
meanings
,
the
problem
is
that
morphological
clues
are
of
course
not
available
,
when
at
the
same
time
we
cannot
rely
on
a
precise
sentence
pattern
.
We
would thus have
to
use
a
parser
based
on
computing
the
dependencies
,
such
as
some
which
have
been
written
to
cope
with
variable-word-order
languages
(
Covington
,
1990
)
.
However
,
since
no
morphological
clue
is
available
either
to
tell
that
an
icon
is
,
e.g.
,
accusative
or
dative
,
we
have
to
rely
on
semantic
knowledge
to
guide
role
assignment
.
In
other
words
,
an
icon
parser
has
to
know
that
drinking
is
something
generally
done
by
living
beings
on
liquid
objects
.
2
The
semantic
analysis
method
The
icon
parser
we
propose
performs
semantic
analysis
of
input
sequences
of
icons
by
the
use
of
an
algorithm
based
on
best-unification
:
when
an
icon
in
the
input
sequence
has
a
"
predicative
"
structure
(
it
may
become
the
head
of
at
least
one
dependency
relation
to
another
node
,
labeled
"
actor
"
)
,
the
other
icons
around
it
are
checked
for
compatibility
.
Compatibility
is
measured
as
a
unification
score
between
two
sets
of
feature
structures
:
the
intrinsic
semantic
features
of
the
candidate
actor
,
and
the
"
extrinsic
"
semantic
features
of
the
predicative
icon
attached
to
a
particular
semantic
role
(
i.e.
the
properties
"
expected
"
from
,
say
,
the
agent
of
kiss
,
the
direct
object
of
drink
,
or
the
concept
qualified
by
the
adjective
fierce
)
.
The
result
yielded
by
the
semantic
parser
is
the
graph
that
maximizes
the
sum
of
the
compatibilities
of
all
its
dependency
relations
.
It
constitutes
,
with
no
particular
contextual
expectations
,
and
given
the
state
of
world
knowledge
stored
in
the
iconic
database
in
the
form
of
semantic
features
,
the
"
best
"
interpretation
of
the
users
'
input
.
$IT(s_i) = F_i$ (where $F_i$ is a set of simple Attribute-Value semantic features, used to represent intrinsic features of the concept, like $\{\langle human, +1\rangle, \langle male, +1\rangle\}$ for Daddy).
Some
of
the
symbols
also
have
selectional
features
,
which
,
if
grouped
by
case
type
,
form
a
case
structure
:
$CS(s_i) = \{\langle c_1, F_{i1}\rangle, \langle c_2, F_{i2}\rangle, \dots, \langle c_n, F_{in}\rangle\}$
(where each of the $c_j$ is a case type such as agent, object, goal, etc., and each $F_{ij}$ a set
of
simple
Attribute-Value
semantic
features
,
used
to
determine
what
features
are
expected
from
a
given
case-filler
—
e.g.
$\langle human, +1\rangle$
is
a
feature
that
the
agent
of
the
verb
write
should
possess
)
.
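These definitions can be made concrete with a minimal data model (a sketch; the class and field names are our own, not the paper's):

```python
from dataclasses import dataclass, field

@dataclass
class Icon:
    """One entry of the iconic lexicon: intrinsic features IT(s_i) and,
    for predicative symbols, a case structure CS(s_i) mapping each case
    type to its selectional features."""
    name: str
    intrinsic: dict = field(default_factory=dict)   # IT(s_i), e.g. {"human": +1}
    cases: dict = field(default_factory=dict)       # CS(s_i): case type -> features

# "Daddy" carries intrinsic features; "write" is predicative and carries
# selectional features for its agent slot.
daddy = Icon("Daddy", intrinsic={"human": +1, "male": +1})
write = Icon("write", cases={"agent": {"human": +1}})
```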
Every couple $\langle c_j, F_{ij}\rangle$ present in the case structure means that $F_{ij}$ is a set of Attribute-Value couples which are attached to $s_i$ as selectional features for the case $c_j$:
For
example
,
we
can
write
:
The
semantic
compatibility
is
the
value
we
seek
to
maximize
to
determine
the
best
assignments
.
At
the
feature
level
(
compatibility
between
two
features
)
,
it
is
defined
so
as
to
"
match
"
extrinsic
and
intrinsic
features
.
This
actually
involves a somewhat
complex
definition
,
taking
into
account
the
modelling
of
conceptual
inheritance
between
semantic
features
;
but
for
the
sake
of
simplicity
in
this
presentation
,
we
may
assume
that
the
semantic
compatibility
at
the
semantic
feature
level
is
defined
as
in
Eq
.
1
,
which
would
be
the
case
for
a
"
flat
"
ontology¹
.
At
the
feature
structure
level
,
i.e.
where
the
semantic
contents
of
icons
are
defined
,
semantic
compatibility
is
calculated
between
two
homogeneous
sets
of
Attribute-Value
couples
:
on
one
side
the
selectional
features
attached
to
a
given
case
slot
of
the
predicate
icon
—
stripped
here
of the
case
type
—
,
on
the
other
side
the
intrinsic
features
of
the
candidate
icon
.
The
basic
idea
here
is
to
define
the
compatibility
as
the
sum
of
matchings
in
the
two
sets
of
attribute-value
pairs
,
in
ratio
to
the
number
of
features
being
compared
against.
It
should
be
noted
that
semantic
compatibility
is
not
a
symmetric
norm
:
it
has
to
measure
how
well
the
candidate
actor
fills
the
expectations
of
a
given
predicative
concept
in
respect
to
one
of
its
particular
cases
.
Hence
there
is
a
filtering
set
(
ST
)
and
a
filtered
set
(
IT
)
,
and
it
is
the
cardinal
of
the
filtering
set
which
is
used
as
denominator
:
(where the $f_{1i}$ and the $f_{2j}$ are the simple Attribute-Value features of the filtering set and of the filtered set, respectively).
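Assuming the flat ontology of Eq. 1, the asymmetric compatibility of Eq. 2 can be sketched as follows (feature inheritance, which the full system models, is deliberately omitted):

```python
def semantic_compat(selectional, intrinsic):
    """Eq. 2 sketch: fraction of the filtering set (selectional features of
    a case slot) matched by the filtered set (intrinsic features of the
    candidate).  Asymmetric: the denominator is the cardinal of the
    filtering set only."""
    if not selectional:
        return 0.0
    matches = sum(1.0 for attr, val in selectional.items()
                  if intrinsic.get(attr) == val)   # Eq. 1, flat ontology
    return matches / len(selectional)

# the agent slot of 'write' expects <human, +1>; 'Daddy' fits perfectly
print(semantic_compat({"human": +1}, {"human": +1, "male": +1}))  # 1.0
```

Swapping the two arguments changes the denominator, which is exactly the asymmetry the text insists on.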
A
threshold
of
acceptability
is
used
to
weed out improbable associations without wasting time
.
Even
with
no
grammar
rules
,
though
,
it
is
necessary
to
take
into
account
the
distance
between
two
icons in the sequence, which makes it more likely that the actor of a given predicate stands just before or just after it than four icons further away, out of its context.

¹The difference in computing time may be neglected in the following reasoning, since the actual formula taking inheritance into account involves a maximum number of computing steps depending on the depth of the semantic features ontology, which does not vary during the processing.
Hence
we
also
introduce
a
"
fading
"
function
,
to
weight
the
virtual
semantic
compatibility
of
a
candidate
actor
to
a
predicate
,
by
its
actual
distance
to
the
predicate
in
the
sequence
. Eq. 3 thus defines the value of the assignment of candidate icon $s_k$ as filler of the role $c_j$ of predicate $s_i$: the (virtual) semantic compatibility of the intrinsic features of $s_k$ to the selectional features of $s_i$ for the case $c_j$, with no consideration of distance (as defined in Eq. 2), weighted by the fading function of their distance in the sequence.
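A minimal sketch of this weighting; the 1/d shape of the fading function D is an assumption, as the text only requires that the weight decrease with distance:

```python
def fading(distance):
    # assumed decreasing shape for the fading function D
    return 1.0 / distance

def assignment_value(virtual_compat, pred_pos, cand_pos):
    """Eq. 3 sketch: the virtual compatibility (Eq. 2) of a candidate to a
    case slot, weighted by its actual distance to the predicate."""
    return fading(abs(pred_pos - cand_pos)) * virtual_compat

# an adjacent candidate keeps its full score; one four icons away is faded
print(assignment_value(0.8, 3, 4))  # 0.8
print(assignment_value(0.8, 3, 7))  # 0.2
```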
Eventually
a
global
assignment
of
actors
(
chosen
among
those
present
in
the
context
)
to
the
case
slots
of
the
predicate
,
has
to
be
determined
.
An
assignment
is
a mapping of the set of icons (other than the predicate being considered) into the set of cases of the predicate.
The
semantic
compatibility
of
this
global
assignment
is
defined
as
the
sum
of
the
values
(
as
defined
in
Eq
.
3
)
of
the
individual
case-filler
allotments
.
For
a
sequence
of
icons
containing
more
than
one
predicative
symbol
,
the
computation
of
the
assignments
is
done
for
every
one
of
them
.
A
global
interpretation
of
the
sequence
is
a
set
of
assignments
for
every
predicate
in
the
sequence
.
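The combinatorics just described can be sketched directly (assumed representation: icons as sequence positions, a predicate's case slots as a list of role names):

```python
from itertools import permutations, product

def assignments(pred, roles, n_icons):
    """All assignments for one predicate: injective mappings of its case
    slots into the other icons, i.e. perm(n_icons - 1, len(roles)) of them."""
    others = [i for i in range(n_icons) if i != pred]
    return [dict(zip(roles, combo)) for combo in permutations(others, len(roles))]

def interpretations(assignments_per_predicate):
    """A global interpretation picks one assignment for every predicate."""
    return list(product(*assignments_per_predicate))

a0 = assignments(0, ["agent", "object"], 4)
print(len(a0))  # 6  (3 * 2 injective mappings)
```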
3
Complexity
of
a
recursive
algorithm
In
former
works
,
this
principle
was
implemented
by
a
recursive
algorithm
(
purely
declarative
Prolog
)
.
Then
,
for
a sequence of $N$ concepts, and supposing we have on average $V$ (valency) roles to fill for every predicate
,
let
us
evaluate
the
time
we
need
to
compute
the
possible
interpretations
of
the
sequence
,
when
we
are
in
the
worst
case
,
i.e.
the
N
icons
are
all
predicates
.
For
every
assignment
,
the
number
of
semantic
compatibility
values
corresponding
to
a
single
role
/
filler
allotment
,
on
an
(
actor
,
candidate
)
couple
(
i.e.
at
the
feature
structure
level
,
as
defined
in
Eq
.
2
)
is
:
.
For
every
icon
,
the
number
of
possible
assignments
is
:
$^{N-1}P_V = (N-1)!/(N-1-V)!$
(
we
suppose
that
$N - 1 > V$
,
because
we
are
only
interested
in
what
happens
when
$N$ becomes
big
,
and
V
typically
lies
around
3
)
.
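The number of assignments per predicate discussed above is the count of V-permutations of the other N-1 icons, which can be checked with the standard library:

```python
import math

def n_assignments(N, V):
    # (N-1)! / (N-1-V)! : possible assignments per predicate
    return math.perm(N - 1, V)

# with V = 3 roles, the count per predicate grows polynomially in N...
print(n_assignments(10, 3))   # 504  (9 * 8 * 7)
# ...but the number of global interpretations grows as its N-th power
print(n_assignments(10, 3) ** 10 > 10 ** 27)  # True
```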
For
every
assignment
,
the
N
—
1
allotment
possibilities
for
the
first
case
are
computed
only
once
.
Then
,
for
every
possibility
of
allotment
of
the
first
case
,
the
possibilities
for
the
second
case
are
recomputed
—
hence
,
there
are
$(N-1)^2$
calculations
of
role
/
filler
allotment
scores
for
the
second
case
.
Similarly
,
every
possible
allotment
for
the
third
case
is
recomputed
for
every
possible
choice
set
on
the
first
two
cases
—
so
,
there
are
$(N-1)^3$
computations
on
the
whole
for
the
third
case
.
This
goes
on
until the $V$th case.
In
the
end
,
for
one
single
assignment
,
the
number
of
times
a
case
/
filler
score
has
been
computed
is $(N-1) + (N-1)^2 + \dots + (N-1)^V$.
Then
,
to
compute
all
the
possible
interpretations
:
Number
of
times
the
system
computes
every
possible
assignment
of
the
first
icon
:
1
.
Number
of
times
the
system
computes
every
possible
assignment
of
the
second
icon
:
$^{N-1}P_V$ (
once
for
every
assignment
of
the
first
icon
,
backtracking
every
time
—
still
supposing
we
are
in
the
worst
case
,
i.e.
all
the
assignments
pass
over
the
acceptability
threshold
)
.
Number
of
times
the
system
computes
every
possible
assignment
of
the
third
icon
:
$(^{N-1}P_V)^2$ (
once
for
every
possible
assignment
of
the
second
icon
,
each
of
them
being
recomputed
once
again
for
every
possible
assignment
of
the
first
icon
)
.
(
.
.
.
)
Number
of
times
the
system
computes
every
possible
assignment
of
the
$N$th icon: $(^{N-1}P_V)^{N-1}$.
Number
of
assignments
computed
on
the
whole
:
every
assignment
of
the
first
icon
(there are $^{N-1}P_V$ of them)
is
computed
just
once
,
since
it
is
at
the
beginning
of
the
backtracking
chain
;
every
assignment
of
the
second
icon
is
computed $^{N-1}P_V$ times, once for every assignment of the first icon; and so on, until every assignment of the $N$th icon, which is computed $(^{N-1}P_V)^{N-1}$ times.
Total
number
of
assignment
calculations
:
Every
calculation
of
an
assignment
value
involves
,
as
we
have
seen
,
calculations
of
a
semantic
compatibility
at
a
feature
structure
level
.
So
,
in total
,
for
the
calculation
of
all
possible
interpretations
of
the
sentence
,
the
number
of
such
calculations
has
been
:
Lastly
,
the
final
scoring
of
every
interpretation
involves
summing
the
scores
of the
assignments
,
which
takes
up
$N-1$ elementary
(
binary
)
sums
.
This
sum
is
computed
every
time
an
interpretation
is
set
,
i.e.
every
time
the
system
reaches
a
leaf
of
the
choice
tree
,
i.e.
every
time
an
assignment
for
the $N$th icon is reached, that is $(^{N-1}P_V)^N$ times
.
So
,
there
is
an
additional
computing
time
which
also
is
a
function
of $N$,
namely
,
expressed
in
number
of
elementary
sums
:
Hence
,
if
we
label $a$ the
ratio
of
the
computing
time
used
to
compute
the
score
of
a
role
/
filler
allotment
to
the
computing
time
of
an
elementary
binary
sum²
,
the
number
of
elementary
operations
involved
in
computing
the
scores
of
the
interpretations
of
the
whole
sequence
is
:
4
The
chart
algorithm
To
avoid
this
major
impediment
,
we
define
a
new
algorithm
which
stores
the
results
of
the
low-level
operations
uselessly
recomputed
at
every
backtrack
:
²$a$ is a constant with respect to $N$: the computation of the semantic compatibility at the feature structure level, defined in Eq. 2, roughly involves $n \times m$ computations of the semantic compatibility at the feature level, defined in Eq. 1 ($n$ being the average number of selectional features for a given role on a given predicate, and $m$ the average number of intrinsic features of the entries in the semantic lexicon), each of which itself involves a sequence of elementary operations (comparisons, floating point number multiplications). It does not depend on $N$, the number of icons in the sequence.
a.
The
low-level
role
/
filler
compatibility
values
,
in
a
chart
called
'
compatibility_table
'
.
The
values
stored
here
correspond
to
the
values
defined
at
Eq. 2.
b.
The
value
of
every
assignment
,
in
'
assignments_table
'
.
The
values
stored
here
correspond
to
assignments
of
multiple
case
slots
of
a
predicate
,
as
defined
at
point
3
of
Section
2
;
they
are
the
sum
of
the
values
stored
at
level
(
a
)
,
multiplied
by
a
fading
function
of
the
distance
between
the
icons
involved
.
c.
The
value
of the interpretations of the
sentence
,
in
'
interpretations_table
'
.
The
values
stored
here
correspond
to
global
interpretations
of
the
sentence
,
as
defined
at
point
4
of
Section
2
.
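A minimal sketch of the three charts and the memoisation they provide (the compute callback and the exact keying are assumptions):

```python
class Charts:
    """The three storage levels: role/filler scores, assignment scores,
    interpretation scores.  Once stored, a value is never recomputed."""
    def __init__(self):
        self.compatibility_table = {}    # (pred, role, cand) -> Eq. 2 score
        self.assignments_table = {}      # (pred, allotment) -> faded sum
        self.interpretations_table = {}  # tuple of assignments -> total

    def role_filler(self, pred, role, cand, compute):
        key = (pred, role, cand)
        if key not in self.compatibility_table:   # compute once...
            self.compatibility_table[key] = compute(pred, role, cand)
        return self.compatibility_table[key]      # ...reuse at every backtrack

charts = Charts()
calls = []
score = lambda p, r, c: calls.append(1) or 0.7
charts.role_filler(0, "agent", 1, score)
charts.role_filler(0, "agent", 1, score)
print(len(calls))  # 1: the second lookup hit the chart
```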
With
this
system
,
at
level
(
b
)
(
calculation
of
the
values
of
assignments
)
,
the
value
of
the
role
/
filler
couples
are
re-used
from
the
compatibility
table
,
and
are
not
recomputed
many
times
.
In
the
same
way
,
at
level
(
c
)
,
the
computation
of
the
interpretations
'
values
by
adding
the
assignments
'
values
does
not
recompute
the
assignments
values
at
every
step
,
but
simply
uses
the
values
stored
in
the
assignments
table
.
Furthermore
,
the
system
has
been
improved
for
the
cases
where
only
partial
modifications
are
done
to
the
graph
,
e.g.
when
the
users
want
to
perform
an
incremental
generation
,
by
generating
the
graph
again
at
every
new
icon
added
to
the
end
of
the
sequence
;
or
when
they
want
to
delete
one
of
the
icons
of
the
sequence
only
,
optionally
to
replace
it
by
another
one
.
In
these
cases
,
a
great
part
of
the
information
remains
unchanged
.
To
take
this
property
into
account
,
the
system
stores
the
current
sequence
and
the
charts
resulting
from
the
parse
in
memory
,
allowing
them
to
be
only
partially
replaced
afterwards
.
Finally
,
we
have
implemented
three
basic
interface
functions
to
be
performed
by
the
parser
.
The
first
one
implements
a
full
parse
,
the
second
partially
re-parses
a
sequence
where
new
icons
have
been
added
,
the
third
partially
re-parses
a
sequence
where
icons
have
been
removed
.
The
three
functions
can
be
described
as
follows
.
Parsing
from
scratch
:
Spot
the
icons
in
the
new
sequence
which
are
potential
predicates
(
which
have
a
valency
frame
)
.
Run
through
the
sequence
and
identify
every
possible ⟨predicate, role, candidate⟩ triple.
For
each
one
of
them
,
calculate
the
semantic
compatibility. Store all the values found in compatibility_table
:
[table layout: rows candidate 1 … candidate N, columns predicate 1 … predicate k]
(
and
eliminate
values
under
the
threshold
as
soon
as
they
appear
)
.
Go
through
the
sequence
and
identify
the
set
of
possible
assignments
for
each
predicate
.
For
every
assignment
,
compute
its
score
using
the
values
stored
in
compatibil-ity_table
,
and
multiplying
by
the
fading
coefficients
$D(1)$, $D(2)$, …
Store
the
values
found
in
:
assignments_table
(
Tab
.
1
)
.
Calculate
the
list
of
all
the
possible
interpretations (one interpretation is one sequence of assignments)
.
Store
them
along
with
their
values
in
interpretations_table
.
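The parsing-from-scratch steps above can be sketched end-to-end (the shapes assumed for icons, compat and fading are noted in the docstring):

```python
from itertools import permutations, product

def parse_from_scratch(icons, compat, fading, threshold=0.0):
    """Sketch of the first interface function.  Assumed shapes: icons[i].cases
    is a dict of case slots (empty for non-predicates); compat(p, role, c)
    returns the Eq. 2 score; fading(d) is the distance-weighting function."""
    N = len(icons)
    preds = [i for i in range(N) if icons[i].cases]   # potential predicates
    ctable = {}                                       # compatibility_table
    for p in preds:
        for role in icons[p].cases:
            for c in range(N):
                if c != p:
                    v = compat(p, role, c)
                    if v >= threshold:                # eliminate low values early
                        ctable[p, role, c] = v
    atable = {}                                       # assignments_table
    for p in preds:
        roles = list(icons[p].cases)
        for combo in permutations([i for i in range(N) if i != p], len(roles)):
            if all((p, r, c) in ctable for r, c in zip(roles, combo)):
                score = sum(fading(abs(p - c)) * ctable[p, r, c]
                            for r, c in zip(roles, combo))
                atable.setdefault(p, []).append((combo, score))
    # interpretations_table: one assignment per predicate, scores summed
    itable = [(choice, sum(s for _, s in choice))
              for choice in product(*(atable.get(p, [((), 0.0)]) for p in preds))]
    return ctable, atable, itable
```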
Add
a
list
of
icons
to
the
currently
stored
sequence
:
Add
the
icons
of
list_of_icons
to
the
currently
stored
sequence
.
For
every
⟨predicate, role, candidate⟩ triple where
either
the
predicate
,
or
the
candidate
,
is
a
new
icon
(
is
a
member
of
list_of_icons
)
,
calculate
the
value
of the ⟨candidate, predicate, role⟩ allotment, and store the value in compatibility_table
.
Calculate
the
new
assignments
made
possible
by
the
new
icons
from
list_of_icons
:
the
assignments
of new
predicates
;
for
every
predicate
already
present
in
the
sequence
before
,
the
assignments
where
at
least
one
of
the
roles
is
allotted
to
one
of
the
icons
of
list_of_icons
.
Table
1
:
Assignments
Table
For
each
of
them
,
calculate
its
value
,
and
store
it
in
assignments_table
.
Recompute
the
table
of
interpretations
entirely (there is no way around this)
.
Remove
a
list
of
icons
from
the
currently
stored
sequence
:
Remove
the
icons
of
list_of_icons
from
the
sequence
stored
in
memory
.
Remove
the
entries
of
compatibility_table
or
assignments_table
involving
at
least
one
of
the
icons
of
list_of_icons
.
Recompute
the
table
of interpretations
.
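The removal function can be sketched as selective invalidation of the charts (the exact keying of the two tables is an assumption):

```python
def remove_icons(removed, compatibility_table, assignments_table):
    """Sketch of the third interface function: drop every stored entry
    involving a removed icon; all other entries are kept unchanged.
    The interpretations table is then recomputed entirely."""
    removed = set(removed)
    for key in [k for k in compatibility_table
                if k[0] in removed or k[2] in removed]:
        del compatibility_table[key]
    for key in [k for k in assignments_table
                if k[0] in removed or removed & set(k[1])]:
        del assignments_table[key]

ct = {(0, "agent", 1): 1.0, (0, "agent", 2): 0.5}
at = {(0, (1,)): 1.0, (0, (2,)): 0.5}
remove_icons([2], ct, at)
print(ct)  # {(0, 'agent', 1): 1.0}
print(at)  # {(0, (1,)): 1.0}
```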
5
Complexity
of
the
chart
algorithm
First
,
let
us
evaluate
the
complexity
of
the
algorithm
presented
in
Section
4
assuming
that
only
the
first
interface
function
is
used
(
parsing
from
scratch
every
time
a
new
icon
is
added
to
the
sequence
)
.
In
the
worst
case
:
the
icons
are
all
predicates
;
no
possible
role
/
filler
allotment
in
the
whole
sequence
is
below
the
threshold
of
acceptability
.
For
every
predicate
,
every
combination
between
one
single
role
and
one
single
other
icon
in
the
sequence
is
evaluated
:
there are $V \times (N-1)$ such possible couples.
•
Since
there
are
(
worst
case
)
N
predicates
,
there are $V \times N \times (N-1)$ such
combinations
to
compute
for
the
whole
sequence
,
in
order
to
fill
the
compatibility
table
.
•
After
the
compatibility
table
has
been
filled
,
its
values
are
used
to
compute
the
score
of
every
possible
assignment
(
of
surrounding
icons
)
for
every
predicate
(
to
its
case
roles
)
.
Computing
the
score
of
an
assignment
involves
summing $V$ values of the compatibility table, each multiplied by a value of the fading function, $V$ typically being a small integer.
Thus
,
for
every
line
in
the
assignments
table
(
Table
1
)
,
the
computing
time
is
constant
with respect to $N$.
For
every
predicate
,
there
are $^{N-1}P_V$ possible
assignments
(
see
Section
3
)
.
Since
there
are
$N$ predicates
,
there
is
a
total
number
(
in
the
worst
case
)
of $N \times {}^{N-1}P_V$
different
possible
assignments
,
i.e.
different
lines
to
fill
in
the
assignments
table
.
So
,
the time to fill the assignments table is, in relation to $N$, $N \times {}^{N-1}P_V$ multiplied by a constant factor
.
•
After
the
assignments
table
has
been
filled
,
its
values
are
used
to
compute
the
score
of
the
possible
interpretations
of
the
sentence
.
The
computation
of
the
score
of
every
single
interpretation
is
simply
a
sum
of
scores
of
assignments
:
since
there
possibly
are
$N$ predicates
,
there
might
be
up
to
N
figures
to
sum
to
compute
the
score
of
an
interpretation
.
•
An
interpretation
is
an
element
of
the
cartesian
product
of
the
sets
of
all
possible
assignments
for
every
predicate
.
Since
every
one
of
these
sets
has
$^{N-1}P_V$ elements
,
there
is
a
total
number
of $(^{N-1}P_V)^N$ interpretations
to
compute
.
As
each
computation
might
involve
$N-1$ elementary
sums
(
there
are
N
figures
to
sum
up
)
,
we
may
conclude
that
the
time
to
fill
the
interpretations
table
grows, in relation to $N$, as $N \times (^{N-1}P_V)^N$.
In
the
end
,
the
calculation
time
is
not
the
product
,
but
the
sum
,
of
the
times
used
to
fill
each
of
the
tables
.
So
,
if
we
label
a
and
$b$
two
constants
,
representing
,
respectively
,
the
ratio
of
the
computing
time
used
to
get
the
score
of
an
elementary
role
/
filler
allotment
to
the
computing
time
of
an
elementary
binary
addition
,
and
the
ratio
of
the
computing
time
used
to
get
the
score
of
an
assignment
from
the
scores
of
the
role
/
filler
allotments
(
adding
up
V
of
them
,
multiplied
by
values
of
the
D
function
)
,
to
the
computing
time
of
an
elementary
binary
addition
,
the
total
computing
time
for
calculating
the
scores
of
all
possible
interpretations
of
the
sentence
is
:
6
Discussion
We
have
presented
a
new
algorithm
for
a
completely
semantic
parse
of
a
sequence
of
symbols
in
a
graph-based
formalism
.
The
new
algorithm
has
a
temporal
complexity
like
in
Eq
.
6
,
to
be
compared
to
the
complexity
of
a
purely
recursive
algorithm
,
like
in
Eq. 5.
In
the
worst
case
,
the
second
function
is
still
dominated
by
a
function
which
grows
hyperexponentially in relation to $N$:
the
number
of
possible
interpretations
multiplied
by
the
time
used
to
sum
up
the
score
of
an
interpretation³
.
In
practice
,
the
values
of
the
parameters
$a$ and $b$
are
fairly
large
,
so
this
member
is
still
small
during
the
first
steps
,
but
it
grows
very
quickly
.
As
for
the
other
member
of
the
function
,
it
is
hyperexponential
in
the
case
of
Eq
.
5
,
whereas
it
is
of
order $b \cdot N \cdot {}^{N-1}P_V$, i.e. it is $O(N^{V+1})$, in the case of Eq. 6.
Practically
,
to
make
the
semantic
parsing
algorithm
acceptable
,
the
problem
of the hyperexponential
growth
of
the
number
of
interpretations
has
to
be
eliminated
at
some
point
.
In
the
system
we
have
implemented
,
a
threshold
mechanism
makes it possible to reject
,
for
every
predicate
,
the
unlikely
assignments
.
In practice, this leaves only
a
small
maximum
number
of
assignments
in
the
assignments
table
,
for
every
predicate
—
typically
3
.
This
means
that
the
number
of
interpretations
is
no
longer
of
the
order
of $(^{N-1}P_V)^N$
,
but
"
only
"
of
$3^N$
:
it
becomes
"
simply
"
exponential
.
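The threshold mechanism amounts to keeping at most k assignments per predicate (k = 3 in the implementation described above); a minimal sketch:

```python
import heapq

def prune_assignments(scored_assignments, k=3):
    """Keep only the k best-scoring assignments of one predicate, so the
    number of interpretations drops from perm(N-1, V)**N to at most k**N."""
    return heapq.nlargest(k, scored_assignments, key=lambda a: a[1])

scored = [("a", 0.1), ("b", 0.9), ("c", 0.5), ("d", 0.7)]
print(prune_assignments(scored))  # [('b', 0.9), ('d', 0.7), ('c', 0.5)]
```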
This
implementation
mechanism
makes
the
practical
computing
time
acceptable
when
running
on
an
average
computer
for
input
sequences
of
no
more
than
approximately
15
symbols
.
In
order
to
give
a
comprehensive
solution
to
the
problem
,
future
developments
will
try
to
develop
heuristics
to
find
the
best
solutions
without
having
to
compute
the
whole
list
of
all
possible
interpretations
and
sort
it
by
decreasing
value
of
semantic
compatibility
.
For
example
,
by
trying
to
explore
the
search
space
(
of
all
possible
interpretations)
from
maximum
values
of
the
assignments
,
it
may
be
possible
to
generate
only
the
10
or
20
best
interpretations
without
having
to
score
all
of
them
to
start
with
.
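One possible shape for such a heuristic is a lazy best-first walk over the cartesian product of per-predicate assignments, sketched here under the assumption that each predicate's assignments are available with their scores:

```python
import heapq

def best_interpretations(per_pred, k=10):
    """Enumerate the k highest-scoring interpretations without scoring the
    whole cartesian product: pop the best index vector from a heap, then
    push its neighbours (one assignment index advanced per predicate)."""
    lists = [sorted(l, key=lambda x: -x[1]) for l in per_pred]
    start = (0,) * len(lists)
    def score(idx):
        return sum(lists[i][j][1] for i, j in enumerate(idx))
    heap = [(-score(start), start)]
    seen = {start}
    out = []
    while heap and len(out) < k:
        neg, idx = heapq.heappop(heap)
        out.append(([lists[i][j][0] for i, j in enumerate(idx)], -neg))
        for i in range(len(idx)):
            nxt = idx[:i] + (idx[i] + 1,) + idx[i + 1:]
            if nxt[i] < len(lists[i]) and nxt not in seen:
                seen.add(nxt)
                heapq.heappush(heap, (-score(nxt), nxt))
    return out
```

Because every interpretation's score is bounded by a neighbour with one assignment index decremented, the heap yields interpretations in non-increasing score order, so generation can stop after the 10 or 20 best.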
