Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JP2012205242
An electronic device capable of appropriately controlling an audio device is provided. The electronic device includes an acquisition device for acquiring an imaging result from at least one imaging device capable of capturing an image including a subject, and a control device for controlling, according to the imaging result of the imaging device, an audio device provided outside the imaging range of the imaging device. A detection device for detecting movement information of the target person based on the imaging result is further provided, and the control device controls the audio device based on the detection result of the detection device. [Selected figure] Figure 1
Electronic device and information transmission system
[0001]
The present invention relates to an electronic device and an information transmission system.
[0002]
There has been proposed a voice guidance apparatus for guiding a user by using voice (for
example, see Patent Document 1).
[0003]
JP 2007-45565 A
[0004]
However, the conventional voice guidance device has a problem in that the voice is difficult to hear unless the user is at a specific location.
[0005]
The present invention has been made in view of the above problems, and an object thereof is to
provide an electronic device and an information transmission system capable of appropriately
controlling an audio device.
[0006]
The electronic device according to the present invention comprises an acquisition device (25) for acquiring an imaging result from at least one imaging device (11) capable of capturing an image including a subject, and a control device (25) for controlling, according to the imaging result of the imaging device, an audio device (12, 13) provided outside the imaging range of the imaging device.
[0007]
In this case, a detection device (25) for detecting movement information of the subject based on the imaging result of the at least one imaging device may be provided, and the control device can control the audio device based on the detection result of the detection device. In this case, the control device can control the audio device so as to warn the subject when it determines, based on the movement information detected by the detection device, that the target person is moving out of a predetermined area or has moved out of the predetermined area.
[0008]
In the electronic device of the present invention, the control device can control the audio device when the at least one imaging device captures an image of a person different from the target person.
The audio device can have a directional speaker.
In addition, a drive control device (25) may be provided to adjust the position and/or attitude of the audio device. In this case, the drive control device may adjust the position and/or attitude of the audio device in accordance with the movement of the subject.
[0009]
In the electronic device of the present invention, the at least one imaging device may include a first imaging device and a second imaging device, and the first and second imaging devices may be disposed such that a part of the imaging range of the first imaging device overlaps a part of the imaging range of the second imaging device.
[0010]
Further, the audio device may include a first audio device provided in the imaging range of the first imaging device and a second audio device provided in the imaging range of the second imaging device, and the control device may control the second audio device when the first audio device is located behind the subject.
In this case, the first audio device may have a first microphone and a first speaker and be provided in the imaging range of the first imaging device, and the second audio device may have a second speaker and be provided in the imaging range of the second imaging device; the control device may control the second speaker when the first imaging device captures an image of the subject together with a person different from the subject.
Further, when the first imaging device captures an image of the subject, the control device may control the first microphone so as to collect the voice of the subject.
[0011]
The electronic device according to the present invention may include a tracking device (25) for tracking the target person using the imaging result of the imaging device. The tracking device acquires an image of a specific part of the target person captured by the imaging device and uses the image of the specific part as a template; when tracking the target person, the tracking device identifies the specific part of the target person using the template and can update the template with a new image of the identified specific part.
[0012]
In this case, the imaging device may include a first imaging device and a second imaging device having an imaging range overlapping a part of the imaging range of the first imaging device. When the first imaging device and the second imaging device can image the target person simultaneously, the tracking device may acquire position information of the specific part of the target person imaged by one imaging device, specify the region corresponding to that position information in the image captured by the other imaging device, and use the image of the specified region as the template for the other imaging device.
In addition, the tracking device may determine that an abnormality has occurred in the target person when the size information of the specific part changes by a predetermined amount or more.
[0013]
An information transmission system according to the present invention comprises at least one imaging device (11) capable of capturing an image including a subject, an audio device (12, 13) provided outside the imaging range of the imaging device, and an electronic device (20) according to the present invention.
[0014]
Note that, although the present invention has been described above with reference to the reference numerals of the drawings representing one embodiment in order to explain the present invention in an easily understandable manner, the present invention is not limited thereto. The embodiment may be improved as appropriate, and at least a part of it may be replaced with another component. Furthermore, components whose arrangement is not particularly limited are not restricted to the arrangements disclosed in the embodiment, and can be disposed at any position where their functions can be achieved.
[0015]
The electronic device and the information transmission system of the present invention have the effect that the audio device can be appropriately controlled.
[0016]
FIG. 1 is a block diagram showing the configuration of a guidance system according to one embodiment.
FIG. 2 is a diagram showing the specific structure of an imaging device.
FIG. 3 is a perspective view showing an audio unit.
FIG. 4 is a hardware configuration diagram of a main body unit.
FIG. 5 is a functional block diagram of the main body unit.
FIG. 6(a) is a graph showing the relationship between the distance from the front focal point of a wide-angle lens system to the head of a person (subject) and the size of the image (head portion), and FIG. 6(b) is a graph obtained by converting the graph of FIG. 6(a) into height from the floor.
FIG. 7 is a graph showing the rate of change of the size of an image.
FIGS. 8(a) and 8(b) are diagrams schematically showing changes in the size of the head image according to the posture of the subject.
FIG. 9 is a diagram showing changes in the size of the image of the subject's head captured on an imaging element according to the subject's position.
FIG. 10 is a diagram schematically showing the relationship between one section in an office and the imaging regions of the imaging devices provided in the section.
FIG. 11 is a diagram (part 1) for explaining the tracking processing of a target person.
FIG. 12 is a diagram (part 2) for explaining the tracking processing of a target person.
FIG. 13 is a diagram (part 3) for explaining the tracking processing of a target person.
FIGS. 14(a) and 14(b) are diagrams (part 1) for explaining the tracking processing when four target persons (subjects A, B, C, and D) move within one section of FIG. 10.
FIGS. 15(a) to 15(c) are diagrams (part 2) for explaining the tracking processing when four target persons (subjects A, B, C, and D) move within one section of FIG. 10.
FIG. 16 is a diagram for explaining a control method of a directional speaker when guiding units are arranged along a passage (hallway).
FIG. 17 is a flowchart showing guidance processing in the guidance system.
[0017]
Hereinafter, a guidance system according to one embodiment will be described in detail with reference to FIGS. 1 to 17. FIG. 1 is a block diagram showing the configuration of the guidance system 100. Although the guidance system 100 can be installed in an office, a commercial facility, an airport, a station, a hospital, an art museum, or the like, in this embodiment the case where the guidance system 100 is installed in an office will be described as an example.
[0018]
As shown in FIG. 1, the guidance system 100 includes a plurality of guiding units 10a, 10b, ..., a card reader 88, and a main body unit 20. Although two guiding units 10a and 10b are illustrated in FIG. 1, the number of guiding units can be set according to the installation place; for example, FIG. 16 illustrates a state in which four guiding units 10a to 10d are installed along a passage. Each of the guiding units 10a, 10b, ... has the same structure. In the following, an arbitrary one of the guiding units 10a, 10b, ... is referred to simply as the guiding unit 10.
[0019]
The guiding unit 10 includes an imaging device 11, a directional microphone 12, a directional
speaker 13, and a driving device 14.
[0020]
The imaging device 11 is provided on the ceiling of the office and mainly images the head of a
person in the office.
In the present embodiment, the ceiling height of the office is 2.6 m. That is, the imaging device
11 images a human head or the like from a height of 2.6 m.
[0021]
As shown in FIG. 2, the imaging device 11 includes a wide-angle lens system 32 configured in three groups, a low-pass filter 34, an imaging element 36 formed of a CCD, a CMOS sensor, or the like, and a circuit board 38 that drives and controls the imaging element. A mechanical shutter (not shown in FIG. 2) is provided between the wide-angle lens system 32 and the low-pass filter 34.
[0022]
The wide-angle lens system 32 includes a first group 32a having two negative meniscus lenses, a second group 32b having a positive lens, a cemented lens, and an infrared cut filter, and a third group 32c having two cemented lenses; an aperture stop 33 is disposed between the second group 32b and the third group 32c. The wide-angle lens system 32 according to this embodiment has a focal length of 6.188 mm and a maximum angle of view of 80°. The wide-angle lens system 32 is not limited to the three-group configuration; for example, the number of lenses, the lens configuration, and the focal length and angle of view of each group can be changed as appropriate.
[0023]
As an example, the imaging element 36 has a size of 23.7 mm × 15.9 mm and 4000 × 3000 pixels (12 million pixels); that is, the size of one pixel is approximately 5.3 μm. However, an imaging element having a different size and number of pixels may be used as the imaging element 36.
[0024]
In the imaging device 11 configured as described above, the light flux incident on the wide-angle lens system 32 enters the imaging element 36 via the low-pass filter 34, and the circuit board 38 converts the output of the imaging element 36 into a digital signal. An image processing control unit (not shown) including an application-specific integrated circuit (ASIC) then performs image processing, such as white balance adjustment, sharpness adjustment, gamma correction, and gradation adjustment, on the digitized image signal, and performs image compression such as JPEG. The image processing control unit transmits the JPEG-compressed still image to the control unit 25 (see FIG. 5) of the main body unit 20.
[0025]
The imaging region of the imaging device 11 overlaps the imaging region of the imaging device 11 of the adjacent guiding unit 10 (see imaging regions P1 to P4 in FIG. 10). This point will be described in detail later.
[0026]
The directional microphone 12 picks up, with high sensitivity, sound incident from a specific direction (for example, the front direction); a super-directional dynamic microphone, a super-directional condenser microphone, or the like may be used.
[0027]
The directional speaker 13 includes an ultrasonic transducer, and is a speaker that transmits
sound only in a limited direction.
[0028]
The driving device 14 drives the directional microphone 12 and the directional speaker 13
integrally or separately.
[0029]
In the present embodiment, as shown in FIG. 3, the directional microphone 12, the directional speaker 13, and the drive device 14 are provided in an integrated audio unit 50.
Specifically, the audio unit 50 includes a unit main body 16 that holds the directional microphone 12 and the directional speaker 13, and a holding unit 17 that holds the unit main body 16. The holding unit 17 rotatably holds the unit main body 16 by a rotating shaft 15b extending in the horizontal direction (the X-axis direction in FIG. 3). The holding unit 17 is provided with a motor 14b constituting the drive device 14, and the unit main body 16 (i.e., the directional microphone 12 and the directional speaker 13) is driven in the pan direction (horizontal direction) by the rotational force of the motor 14b. Further, the holding unit 17 is provided with a rotary shaft 15a extending in the vertical direction (the Z-axis direction), and the rotary shaft 15a is rotated by a motor 14a (fixed to the ceiling of the office) constituting the drive device 14. As a result, the unit main body 16 (i.e., the directional microphone 12 and the directional speaker 13) is driven in the tilt direction (swung in the vertical direction (the Z-axis direction)). A DC motor, a voice coil motor, a linear motor, or the like can be used as each of the motors 14a and 14b.
[0030]
It is assumed that the motor 14a can drive the directional microphone 12 and the directional speaker 13 within a range of approximately 60° to 80° in each of the clockwise and counterclockwise directions from the state (−90°) in which the directional microphone 12 and the directional speaker 13 point straight down. The driving range is limited to this extent because, when the audio unit 50 is provided on the ceiling of the office, a human head may be directly below the audio unit 50, but a head is not assumed to be present beside the audio unit 50.
[0031]
In the present embodiment, the audio unit 50 and the imaging device 11 of FIG. 1 are provided separately. However, the present invention is not limited to this; the entire guiding unit 10 may be unitized and provided on the ceiling.
[0032]
Returning to FIG. 1, the card reader 88 is a device provided, for example, at the office entrance, and reads ID cards carried by persons permitted to enter the office.
[0033]
The main body unit 20 processes the information (data) input from the guiding units 10a, 10b, ... and the card reader 88, and centrally controls the guiding units 10a, 10b, ....
A hardware configuration diagram of the main body unit 20 is shown in FIG. 4. As shown in FIG. 4, the main body unit 20 includes a CPU 90, a ROM 92, a RAM 94, a storage unit (here, an HDD (Hard Disk Drive) 96a and a flash memory 96b), an interface unit 97, and the like, each connected to a bus 98. The interface unit 97 is an interface for connecting to the imaging device 11 and the drive device 14 of each guiding unit 10; various connection standards, such as wireless/wired LAN, USB, HDMI, and Bluetooth (registered trademark), can be adopted.
[0034]
In the main body unit 20, the CPU 90 executes programs stored in the ROM 92 or the HDD 96a to realize the functions shown in FIG. 5. That is, when the CPU 90 executes the programs, the main body unit 20 functions as the voice recognition unit 22, the voice synthesis unit 23, and the control unit 25 shown in FIG. 5. FIG. 5 also shows the storage unit 24, which is realized by the flash memory 96b of FIG. 4.
[0035]
The voice recognition unit 22 performs voice recognition based on feature amounts of the voice collected by the directional microphone 12. The voice recognition unit 22 has an acoustic model and a dictionary function, and performs voice recognition using them. The acoustic model stores acoustic features, such as phonemes and syllables, of the language to be recognized, and the dictionary function stores phonological information on the pronunciation of each word to be recognized. The voice recognition unit 22 may be realized by the CPU 90 executing commercially available voice recognition software (a program). Voice recognition technology is described, for example, in Japanese Patent No. 4587015 (Japanese Patent Application Laid-Open No. 2004-325560).
[0036]
The voice synthesis unit 23 synthesizes the voice to be emitted (output) by the directional speaker 13. Voice synthesis can be performed by generating speech segments and connecting them. The principle of voice synthesis is as follows: where a consonant is represented by C (Consonant) and a vowel by V (Vowel), characteristic parameters and speech segments of basic units such as CV, CVC, and VCV are stored, and the segments are connected while pitch and duration are controlled, thereby synthesizing voice. Voice synthesis technology is described, for example, in Japanese Patent No. 3727885 (Japanese Patent Laid-Open No. 2003-223180).
[0037]
The control unit 25 controls the entire guidance system 100 in addition to controlling the main body unit 20. For example, the control unit 25 stores the JPEG-compressed still images transmitted from the image processing control unit of the imaging device 11 in the storage unit 24. Further, based on the images stored in the storage unit 24, the control unit 25 controls which of the plurality of directional speakers 13 is used to provide guidance to a specific person (target person) in the office.
[0038]
Further, the control unit 25 controls the drive of the directional microphone 12 and the directional speaker 13 so that the sound collection range and the sound output range at least overlap those of the adjacent guiding unit 10, according to the distance to the adjacent guiding unit 10. The control unit 25 also drives the directional microphone 12 and the directional speaker 13 so that voice guidance can be performed over a range wider than the imaging range of the imaging device 11, and sets the sensitivity of the directional microphone 12 and the volume of the directional speaker 13 accordingly. This is because the directional microphone 12 and the directional speaker 13 of a guiding unit 10 whose imaging device does not image the subject may be used to provide voice guidance to the subject.
[0039]
In addition, the control unit 25 acquires the card information of the ID card read by the card reader 88 and, based on the employee information and the like stored in the storage unit 24, identifies the person who held the ID card over the card reader 88.
[0040]
The storage unit 24 stores a correction table (described later) that corrects a detection error due
to the influence of distortion of the optical system of the imaging device 11, employee
information, an image captured by the imaging device 11, and the like.
[0041]
Next, imaging of the head of the subject by the imaging device 11 will be described in detail.
FIG. 6(a) is a graph showing the relationship between the distance from the front focal point of the wide-angle lens system 32 to the head of a person (subject) and the size of the image (head portion), and FIG. 6(b) shows a graph obtained by converting the graph of FIG. 6(a) into height from the floor.
[0042]
Here, assuming that the focal length of the wide-angle lens system 32 is 6.188 mm as described above and that the diameter of the subject's head is 200 mm, when the distance from the front focal point of the wide-angle lens system 32 to the position of the subject's head is 1000 mm (that is, when a person 1.6 m tall stands upright under the 2.6 m ceiling), the diameter of the image of the subject's head formed on the imaging element 36 of the imaging device 11 is 1.238 mm.
On the other hand, when the position of the subject's head falls by 300 mm and the distance from the front focal point of the wide-angle lens system 32 to the position of the subject's head becomes 1300 mm, the diameter of the image of the subject's head formed on the imaging element is 0.952 mm. That is, in this case, when the head height changes by 300 mm, the size (diameter) of the image changes by 0.286 mm (23.1%).
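The image sizes quoted above are consistent with simple pinhole magnification (image size = focal length × object size / object distance). The following short Python sketch, added here for illustration and not part of the original patent text, reproduces the figures:

    # Pinhole-magnification check of the head-image sizes in paragraph [0042].
    F_MM = 6.188        # focal length of the wide-angle lens system 32 [mm]
    HEAD_MM = 200.0     # assumed diameter of the subject's head [mm]

    def image_diameter_mm(distance_mm: float) -> float:
        """Diameter of the head image on the imaging element 36 [mm]."""
        return F_MM * HEAD_MM / distance_mm

    upright = image_diameter_mm(1000.0)   # 1.238 mm (head 1000 mm from front focus)
    lowered = image_diameter_mm(1300.0)   # 0.952 mm (head lowered by 300 mm)
    print(upright, lowered)
    print(upright - lowered)              # 0.286 mm
    print((upright - lowered) / upright)  # about 0.231, i.e. 23.1 %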
[0043]
Similarly, when the distance from the front focal point of the wide-angle lens system 32 to the position of the subject's head is 2000 mm (when the subject is half-crouching), the diameter of the image of the subject's head formed on the imaging element 36 of the imaging device 11 is 0.619 mm; when the position of the subject's head falls 300 mm from there, the size of the image of the subject's head formed on the imaging element becomes 0.538 mm. That is, in this case, when the head height changes by 300 mm, the size (diameter) of the image of the head changes by 0.081 mm (13.1%). As described above, in the present embodiment, the change (rate of change) in the size of the head image decreases as the distance from the front focal point of the wide-angle lens system 32 to the subject's head increases.
[0044]
Generally, for adults, individual differences in height are on the order of 300 mm, while differences in head size are an order of magnitude smaller; moreover, height and head size tend to satisfy a predetermined relationship. Therefore, the height of the subject can be inferred by comparing a standard head size (for example, a diameter of 200 mm) with the size of the imaged head. Also, since the ears are generally located about 150 mm to 200 mm below the top of the head, the height position of the subject's ears can also be inferred from the size of the head. People usually stand when entering the office, so if the imaging device 11 provided near the reception captures an image of the head and the height of the subject and the height of the ears are estimated, the distance from the front focal point of the wide-angle lens system to the subject can thereafter be determined from the size of the image of the subject's head. The subject's posture (upright, half-crouching, fallen) and changes in posture can thus be determined in a state where privacy is maintained. When the subject has fallen, the position of the ears can be inferred to be about 150 mm to 200 mm from the top of the head toward the feet. In this way, by using the position and size of the head imaged by the imaging device 11, it becomes possible to infer the position of the ears even if, for example, the ears are hidden by hair.
[0045]
FIG. 7 is a graph showing the rate of change of the size of the head image. FIG. 7 shows the rate of change in image size when the position of the subject's head changes by 100 mm from the value shown on the horizontal axis. As can be seen from FIG. 7, when the distance from the front focal point of the wide-angle lens system 32 to the position of the subject's head changes from 1000 mm to 1100 mm, the rate of change of the image size is as large as 9.1%. Thus, even if head sizes are the same, subjects whose heights differ by about 100 mm can easily be distinguished from one another based on the height difference. On the other hand, when the distance from the front focal point of the wide-angle lens system 32 to the position of the subject's head changes from 2000 mm to 2100 mm, the rate of change in image size is 4.8%. Although this rate of change is smaller than in the case where the distance changes from 1000 mm to 1100 mm, it is still sufficient to easily identify a change in posture of the same subject.
[0046]
As described above, using the imaging result of the imaging device 11 according to the present embodiment, the distance from the front focal point of the wide-angle lens system 32 to the subject can be detected from the size of the image of the subject's head, and the control unit 25 can use this detection result to determine the posture of the subject (upright, half-crouching, fallen) and changes in that posture. This point will be described in more detail based on FIGS. 8(a) and 8(b).
[0047]
FIGS. 8(a) and 8(b) are diagrams schematically showing changes in the size of the head image according to the posture of the subject. When the imaging device 11 is provided on the ceiling and images the head of the subject, the head is imaged large, as shown in FIG. 8(a), when the subject stands upright as with the left-hand person in FIG. 8(b), and is imaged small when the subject has fallen as with the right-hand person in FIG. 8(b). When the subject is half-crouching, as with the middle person in FIG. 8(b), the image of the head is smaller than when standing and larger than when fallen. Therefore, in the present embodiment, the control unit 25 can determine the state of the subject by detecting the size of the image of the subject's head in the image transmitted from the imaging device 11. In this case, since the posture of the subject and changes in it are determined from the image of the head alone, privacy can be protected better than when the determination uses the face or the entire body of the subject.
[0048]
FIGS. 6(a), 6(b), and 7 show graphs for the case where the subject is at a position of low angle of view of the wide-angle lens system 32 (directly below the wide-angle lens system 32). When the subject is at a position of peripheral angle of view of the wide-angle lens system 32, the image may be affected by distortion according to the angle subtended by the subject. This will be described in detail below.
[0049]
FIG. 9 shows the change in the size of the image of the subject's head captured on the imaging element 36 according to the position of the subject. The center of the imaging element 36 is assumed to coincide with the optical axis of the wide-angle lens system 32. In this case, because of distortion, the size of the head image captured by the imaging device 11 differs between the case where the subject stands directly below the imaging device 11 and the case where the subject stands away from it, even when the subject is upright in both cases. When the head is imaged at position p1 in FIG. 9, the size of the image, the distance L1 from the center of the imaging element 36, and the angle θ1 from the center of the imaging element 36 can be obtained from the imaging result. Similarly, when the head is imaged at position p2 in FIG. 9, the size of the image, the distance L2 from the center of the imaging element 36, and the angle θ2 from the center of the imaging element 36 can be obtained. The distances L1 and L2 are parameters representing the distance between the front focal point of the wide-angle lens system 32 and the subject's head, and the angles θ1 and θ2 are parameters representing the angle subtended at the wide-angle lens system 32 by the subject. In such a case, the control unit 25 corrects the size of the captured image based on the distances L1 and L2 and the angles θ1 and θ2; in other words, when the subject is in the same posture, the size of the image captured at position p1 and the size of the image captured at position p2 are corrected so as to be substantially equal. By doing this, in the present embodiment, the posture of the subject can be detected accurately regardless of the positional relationship between the imaging device 11 and the subject (the distance to the subject and the angle subtended by the subject). The parameters (correction table) used for this correction are stored in the storage unit 24.
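As a hedged illustration of this correction (the patent stores a correction table in the storage unit 24 but does not disclose its contents, so the grid and gain values below are invented for the example), a size measured at an off-axis position can be normalized to its on-axis equivalent by interpolating a table indexed by the image height L:

    # Hypothetical distortion-correction lookup; grid and gains are assumptions.
    L_GRID_MM = [0.0, 2.0, 4.0, 6.0, 8.0]       # image height from sensor centre
    GAIN      = [1.00, 1.02, 1.06, 1.13, 1.22]  # size correction factor

    def interp(x, xs, ys):
        """Piecewise-linear interpolation, clamped at the table ends."""
        if x <= xs[0]:
            return ys[0]
        if x >= xs[-1]:
            return ys[-1]
        for i in range(1, len(xs)):
            if x <= xs[i]:
                t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
                return ys[i - 1] + t * (ys[i] - ys[i - 1])

    def corrected_size_mm(measured_mm: float, L_mm: float) -> float:
        """Normalise a head-image size measured at image height L_mm."""
        return measured_mm * interp(L_mm, L_GRID_MM, GAIN)

    # Heads imaged at p1 and p2 in the same posture should come out nearly equal:
    print(corrected_size_mm(1.21, 2.0), corrected_size_mm(1.10, 6.0))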
[0050]
Here, the control unit 25 sets the imaging interval of the imaging device 11. The control unit 25 can use different shooting frequencies (frame rates) for time zones in which many people are likely to be in the office and for other time zones. For example, if the control unit 25 determines that the current time falls within a time zone in which many people are likely to be in the office (for example, from 9 am to 6 pm), it can capture a still image once per second (32,400 images per day), and otherwise capture a still image once every 5 seconds (6,480 images per day). After the captured still images are temporarily stored in the storage unit 24 (flash memory 96b), the imaging data for each day may, for example, be stored in the HDD 96a and then deleted from the storage unit 24.
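A minimal sketch of this schedule follows (the 9 am to 6 pm window is the example given above; the boundary handling is our own assumption):

    from datetime import datetime, time

    BUSY_START, BUSY_END = time(9, 0), time(18, 0)  # 9 am - 6 pm, per [0050]

    def imaging_interval_s(now: datetime) -> float:
        """Still-image capture interval chosen by the control unit 25."""
        busy = BUSY_START <= now.time() < BUSY_END
        return 1.0 if busy else 5.0   # 1 frame/s busy, 1 frame per 5 s otherwise

    print(imaging_interval_s(datetime(2012, 3, 26, 10, 30)))  # 1.0
    print(imaging_interval_s(datetime(2012, 3, 26, 22, 0)))   # 5.0
    # At 1 frame/s the nine busy hours alone give 9 * 3600 = 32,400 images/day,
    # matching the figure quoted above.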
[0051]
Instead of still images, moving images may be captured. In this case, a moving image may be captured continuously, or short moving images of about 3 to 5 seconds may be captured intermittently.
[0052]
Next, the imaging area of the imaging device 11 will be described.
[0053]
FIG. 10 is a diagram schematically showing, as an example, the relationship between one section 43 in the office and the imaging regions of the imaging devices 11 provided in the section 43.
In FIG. 10, it is assumed that four imaging devices 11 (only their imaging regions P1, P2, P3, and P4 are illustrated) are provided in one section 43, and that one section is 256 m² (16 m × 16 m). Each of the imaging regions P1 to P4 is assumed to be a circular region overlapping the imaging regions adjacent to it in the X direction and the Y direction. In FIG. 10, for convenience of explanation, the four divided portions of the section (corresponding to the imaging regions P1 to P4) are shown as divided portions A1 to A4. In this case, assuming that the angle of view of the wide-angle lens system 32 is 80°, the focal length is 6.188 mm, the height of the ceiling is 2.6 m, and the height of the subject is 1.6 m, the inside of a circle with a radius of 5.67 m (about 100 m²) is the imaging region. Since each of the divided portions A1 to A4 is 64 m², the divided portions A1 to A4 can be contained in the imaging regions P1 to P4 of the respective imaging devices 11, and parts of the imaging regions of the imaging devices 11 can be made to overlap.
[0054]
FIG. 10 shows the concept of the overlap of the imaging regions P1 to P4 as viewed from the object side; however, the imaging regions P1 to P4 are the regions from which light enters the wide-angle lens system 32, and not all of the light incident on the wide-angle lens system 32 enters the rectangular imaging element 36. For this reason, in the present embodiment, the imaging devices 11 may be installed in the office so that the imaging regions actually captured by the adjacent imaging elements 36 overlap. Specifically, an adjustment unit (for example, an elongated hole, a large adjustment hole, or a shift optical system for adjusting the imaging position) for adjusting the attachment of the imaging device 11 may be provided, and the attachment position of each imaging device 11 may be determined by adjusting the overlap position while visually confirming the images captured by the respective imaging elements 36. If, for example, the divided portion A1 shown in FIG. 10 coincided exactly with the imaging area of one imaging element 36, the images captured by the respective imaging devices 11 would fit together exactly without overlapping. However, considering the degrees of freedom in attaching the plurality of imaging devices 11 and cases where the attachment height differs due to ceiling beams and the like, it is preferable to make the imaging regions P1 to P4 of the plurality of imaging elements 36 overlap as described above.
[0055]
The overlapping amount can be set based on the size of a human head. For example, if the circumference of the head is 60 cm, the overlapping region may be set to contain a circular area of about 20 cm in diameter. Under the setting that only a part of the head needs to be contained in the overlapping region, a circle of, for example, about 10 cm in diameter may be contained. If the overlapping amount is set to this extent, adjustment when attaching the imaging devices 11 to the ceiling is facilitated, and in some cases the imaging regions of the plurality of imaging devices 11 can be made to overlap without any adjustment.
[0056]
Next, based on FIG. 11 to FIG. 13, tracking processing of a target person using the guiding unit
10 (the imaging device 11) will be described. FIG. 11 schematically shows how the target person
enters the office.
[0057]
First, the processing performed when a target person enters the office will be described using FIG. 11. As shown in FIG. 11, when entering the office, the target person holds his or her ID card 89 over the card reader 88. The card information acquired by the card reader 88 is transmitted to the control unit 25, and the control unit 25 identifies the target person holding the ID card 89 based on the acquired card information and the employee information stored in the storage unit 24. If the target person is not an employee, he or she is identified as a guest, because the card held over the reader is a guest card handed out at the general reception desk, by a security guard, or the like.
[0058]
After the target person has been identified in this way, the control unit 25 images the head of the target person using the imaging device 11 of the guiding unit 10 provided above the card reader 88. The control unit 25 then cuts out, as a reference template, the image portion assumed to be the head from the image captured by the imaging device 11, and registers it in the storage unit 24.
[0059]
Methods of extracting the image portion assumed to be the head from the image captured by the imaging device 11 include, for example: (1) registering templates of head images of a plurality of subjects in advance and extracting the head portion by pattern matching against these images; and (2) extracting a roughly circular portion of the expected size as the head portion.
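Method (2) can be sketched, for example, with a Hough circle transform; this is an illustration of the idea rather than the patent's implementation, and the parameter values are assumptions:

    import cv2
    import numpy as np

    def head_candidates(gray: np.ndarray, expected_radius_px: int):
        """Return (x, y, r) circles close to the expected head-image size."""
        blurred = cv2.GaussianBlur(gray, (9, 9), 2)
        circles = cv2.HoughCircles(
            blurred, cv2.HOUGH_GRADIENT, 1.2, expected_radius_px * 2,
            param1=100, param2=30,
            minRadius=int(expected_radius_px * 0.7),
            maxRadius=int(expected_radius_px * 1.3))
        if circles is None:
            return []
        return np.round(circles[0]).astype(int).tolist()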
[0060]
Before the extraction of the head portion described above, the subject may be imaged from the front using a camera installed near the card reader, and the part of the imaging region of the imaging device 11 in which the head will be imaged may be predicted in advance.
In this case, the position of the subject's head may be predicted from a face recognition result on the camera image, or may be predicted by using, for example, a stereo camera as the camera. By doing this, the head portion can be extracted with high accuracy.
[0061]
Here, it is assumed that the height of the target person is registered in the storage unit 24 in advance, and the control unit 25 associates the height with the reference template. When the target person is a guest, the height is measured by a camera or the like that images the target person from the front, and the measured height is associated with the reference template.
[0062]
Further, the control unit 25 creates templates (composite templates) in which the magnification of the reference template is changed, and stores them in the storage unit 24. In this case, the control unit 25 creates, as composite templates, templates of the head size that would be imaged by the imaging device 11 when the head height changes, for example, in steps of 10 cm. When creating the composite templates, the control unit 25 takes into consideration the optical characteristics of the imaging device 11 and the imaging position at which the reference template was acquired.
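A hedged sketch of this idea: since the head image scales as the inverse of the head-to-lens distance (see [0042]), templates for head heights changing in 10 cm steps can be generated by rescaling the reference template. The step range below is our own choice:

    import cv2

    def make_composite_templates(ref_template, ref_distance_mm,
                                 steps=range(-3, 4), step_mm=100.0):
        """Map head-to-lens distance -> template rescaled for that distance."""
        templates = {}
        for k in steps:
            d = ref_distance_mm + k * step_mm
            scale = ref_distance_mm / d        # image shrinks as distance grows
            h, w = ref_template.shape[:2]
            size = (max(1, round(w * scale)), max(1, round(h * scale)))
            templates[d] = cv2.resize(ref_template, size,
                                      interpolation=cv2.INTER_AREA)
        return templates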
[0063]
Next, the tracking processing by a single imaging device 11 immediately after entry into the office will be described using FIG. 12. After the target person enters the office, the control unit 25 starts continuous acquisition of images by the imaging device 11, as shown in FIG. 12. The control unit 25 then performs pattern matching between each continuously acquired image and the reference template (or composite templates), extracts the portion (head portion) where the score value is higher than a predetermined reference value, and obtains the position of the target person (the height position and the two-dimensional position on the floor surface) from the extracted portion. Here, it is assumed that the score value exceeds the predetermined reference value when the image α in FIG. 12 is acquired. The control unit 25 therefore takes the position of the image α in FIG. 12 as the position of the subject, sets the image α as the new reference template, and creates composite templates from the new reference template.
[0064]
Thereafter, the control unit 25 tracks the subject's head using the new reference template (or composite templates), and whenever the position of the subject changes, uses the image obtained at that time (for example, the image β in FIG. 12) as a new reference template and creates composite templates from it (that is, the reference template and the composite templates are updated). During such tracking, the size of the head image may suddenly decrease; in other words, the magnification of the composite template used for pattern matching may fluctuate greatly. In such a case, the control unit 25 may determine that an abnormality, such as a fall of the subject, has occurred.
[0065]
Next, the connection processing between two imaging devices 11 (the processing of transferring the reference template and the composite templates) will be described based on FIG. 13.
[0066]
As a premise, as shown in FIG. 13, when the target person is located between two imaging devices 11 (in the overlapping portion of the imaging regions described above), it is assumed that the control unit 25 has detected the position of the subject's head using one (left-side) imaging device 11, and that the reference template at this time is the image β of FIG. 13. In this case, the control unit 25 calculates, based on the position of the subject's head, at which position in the imaging region of the other (right-side) imaging device 11 the head is imaged. The control unit 25 then generates, as a new reference template, the image (image γ in FIG. 13) at the position where the head should be imaged in the imaging region of the other (right-side) imaging device 11. In the subsequent tracking processing using the right-side imaging device 11, the tracking processing shown in FIG. 12 is performed while the reference template (image γ) is updated.
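Under a simplified geometry (both cameras looking straight down from the same height, a common floor coordinate system, and an assumed image scale), this hand-off can be sketched as follows; all names and numbers are illustrative:

    PX_PER_M = 350.0   # assumed pixels per metre at head height

    def to_floor(cam_centre_m, pixel_xy):
        """Pixel offset from a camera's image centre -> floor position [m]."""
        return (cam_centre_m[0] + pixel_xy[0] / PX_PER_M,
                cam_centre_m[1] + pixel_xy[1] / PX_PER_M)

    def to_pixels(cam_centre_m, floor_xy):
        """Floor position [m] -> pixel offset in another camera's image."""
        return (round((floor_xy[0] - cam_centre_m[0]) * PX_PER_M),
                round((floor_xy[1] - cam_centre_m[1]) * PX_PER_M))

    # Head tracked by the left camera; predict where it appears in the right
    # camera, then cut the patch there (image gamma in FIG. 13) as the new
    # reference template for the right camera.
    left_centre, right_centre = (0.0, 0.0), (8.0, 0.0)   # 8 m apart (assumed)
    head_floor = to_floor(left_centre, (1600, 120))
    print(to_pixels(right_centre, head_floor))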
[0067]
By performing the above-described processing, it is possible to perform tracking processing of
the target person in the office by updating the reference template as needed.
[0068]
Next, the tracking processing in the case where four target persons (target persons A, B, C, and D) move within one section 43 of FIG. 10 will be described based on FIGS. 14 and 15. During the tracking processing, the control unit 25 updates the reference templates as needed, as shown in FIGS. 12 and 13.
[0069]
The state at time T1 is shown in FIG. 14(a), and the states at the subsequent times T2 to T5 are shown in FIGS. 14(b) to 15(c).
[0070]
At time T1, the target person C exists in the divided portion A1, and the target persons A and B
exist in the divided portion A3. In this case, the imaging device 11 having the imaging area P1
images the head of the subject C, and the imaging device 11 having the imaging area P3 images
the heads of the subjects A and B.
[0071]
Next, at time T2, the imaging device 11 having the imaging area P1 images the heads of the
subjects B and C, and the imaging device 11 having the imaging area P3 images the heads of the
subjects A and B.
[0072]
In this case, based on the imaging results of the imaging devices 11 at times T1 and T2, the control unit 25 recognizes that the target persons A and C are moving in the left-right direction of FIG. 14(b) and that the target person B is moving in the up-down direction.
The subject B is imaged by the two imaging devices 11 at time T2 because the subject B is in the portion where the imaging regions of the two imaging devices 11 overlap. In the state of FIG. 14(b), the control unit 25 performs, for the target person B, the connection processing of FIG. 13 (the processing of transferring the reference template and the composite templates between the two imaging devices 11).
[0073]
Next, at time T3, the imaging device 11 having the imaging area P1 images the heads of the subjects B and C, the imaging device 11 having the imaging area P2 images the head of the subject C, the imaging device 11 having the imaging area P3 images the head of the subject A, and the imaging device 11 having the imaging area P4 images the heads of the subjects A and D.
[0074]
In this case, at time T3 (FIG. 15(a)), the control unit 25 recognizes that the target person A is at the boundary between the divided portions A3 and A4 (moving from A3 to A4), that the target person B is in the divided portion A1, that the target person C is at the boundary between the divided portions A1 and A2 (moving from A1 to A2), and that the target person D is in the divided portion A4.
In the state of FIG. 15(a), the control unit 25 performs the connection processing of FIG. 13 (the processing of transferring the reference template and the composite templates between the two imaging devices 11) for the subjects A and C.
[0075]
Similarly, at time T4 (FIG. 15(b)), the control unit 25 recognizes that the target person A is in the divided portion A4, the target person B is in the divided portion A1, the target person C is in the divided portion A2, and the target person D is between the divided portions A2 and A4. In the state of FIG. 15(b), the control unit 25 performs, for the target person D, the connection processing of FIG. 13 (the processing of transferring the reference template and the composite templates between the two imaging devices 11). At time T5 (FIG. 15(c)), the control unit 25 recognizes that the target person A is in the divided portion A4, the target person B is in the divided portion A1, the target person C is in the divided portion A2, and the target person D is in the divided portion A2.
[0076]
In the present embodiment, since parts of the imaging regions of the plurality of imaging devices 11 overlap as described above, the control unit 25 can recognize the position and the moving direction of each target person. In this way, the control unit 25 can continuously track each target person in the office with high accuracy.
[0077]
Next, a method by which the control unit 25 controls the directional speakers 13 will be described based on FIG. 16. FIG. 16 illustrates the case where the guiding units 10 are arranged along a passage (hallway); each area indicated by a one-dot chain line is the imaging range of the imaging device 11 of the corresponding guiding unit 10. In the case of FIG. 16 as well, the imaging ranges of adjacent imaging devices 11 are assumed to overlap.
[0078]
In the present embodiment, when the target person moves from position K1 toward position K4 (in the +X direction) as shown in FIG. 16, the control unit 25 provides voice guidance to the target person using the directional speaker 13 of the guiding unit 10a while the target person is at position K1 (see the thick solid arrow extending from the guiding unit 10a).
[0079]
On the other hand, when the target person is at position K2, the control unit 25 provides voice guidance not with the guiding unit 10a, whose imaging device 11 captures the image of the target person (see the thick dashed arrow extending from the guiding unit 10a), but with the directional speaker 13 of the guiding unit 10b, whose imaging device 11 does not image the subject (see the thick solid arrow extending from the guiding unit 10b).
[0080]
The directional speaker 13 is controlled in this way because, when the target person is moving in the +X direction, voice guidance from the directional speaker 13 of the guiding unit 10a would reach the target person from behind the ears, whereas voice guidance performed while the control unit 25 controls the posture of the directional speaker 13 of the guiding unit 10b reaches the target person from the front side of the ears.
That is, when the subject is moving in the +X direction, voice guidance can be given from the front of the subject's face by selecting a directional speaker 13 located further in the +X direction than the subject.
The control unit 25 may instead select the directional speaker 13 so as to provide voice guidance from the side of the target person; in other words, the control unit 25 may select the directional speaker 13 so as to avoid providing voice guidance from behind the subject's ears.
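The selection rule of [0078] to [0080] can be sketched as follows (our illustration; the clearance radius and the fallback behaviour are assumptions):

    def choose_speaker(subject_pos, subject_vel, speakers, others,
                       min_clearance_m=2.0):
        """Pick a speaker ahead of the moving subject, away from bystanders.

        subject_pos, subject_vel: (x, y) position [m] and direction of travel;
        speakers, others: lists of (x, y) positions [m].
        """
        def ahead(sp):   # non-negative projection onto the walking direction
            dx, dy = sp[0] - subject_pos[0], sp[1] - subject_pos[1]
            return dx * subject_vel[0] + dy * subject_vel[1] >= 0.0

        def clear(sp):   # no bystander close to the chosen speaker
            return all((sp[0] - o[0]) ** 2 + (sp[1] - o[1]) ** 2
                       >= min_clearance_m ** 2 for o in others)

        candidates = [sp for sp in speakers if ahead(sp) and clear(sp)]
        if not candidates:
            candidates = [sp for sp in speakers if ahead(sp)]  # at least not behind
        if not candidates:
            return None
        return min(candidates, key=lambda sp: (sp[0] - subject_pos[0]) ** 2 +
                                              (sp[1] - subject_pos[1]) ** 2)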
[0081]
Further, when the target person is at position K4, the control unit 25 provides voice guidance to the target person using the directional speaker 13 of the guiding unit 10d. The directional speaker 13 is controlled in this way because, if voice guidance were given at position K4 using the directional speaker 13 of the guiding unit 10c (see the thick dashed arrow extending from the guiding unit 10c), another person near the target person might hear the voice guidance.
[0082]
In the present embodiment, as described above, the control unit 25 selects, based on the imaging result of the at least one imaging device 11, a directional speaker 13 from which there is no fear that another person will hear the voice guidance. It is also assumed that the target person may make an inquiry via the directional microphone 12 even when another person is nearby, as at position K4. In such a case, the words uttered by the subject may be collected using the directional microphone 12 of the guiding unit 10c that images the subject (the directional microphone 12 at the position closest to the subject). However, the present invention is not limited to this, and the control unit 25 may collect the words uttered by the subject using the directional microphone 12 located in front of the subject's mouth.
[0083]
Each guiding unit 10 may be driven (powered on) only as necessary. For example, the guiding unit 10b adjacent to the guiding unit 10a may be driven at the stage when it is found that the guiding unit 10a has captured an image of a visitor moving toward the +X side in FIG. 16. In this case, the guiding unit 10b may start driving before the visitor reaches the overlapping portion of the imaging range of the imaging device 11 of the guiding unit 10a and the imaging range of the imaging device 11 of the guiding unit 10b. In addition, the guiding unit 10a may turn off its power or enter an energy-saving mode (standby mode) when it can no longer image the visitor.
[0084]
The audio unit 50 shown in FIG. 3 may be provided with a drive mechanism capable of driving the unit main body 16 in the X-axis direction or the Y-axis direction. In this case, if the position of the directional speaker 13 is changed via the drive mechanism so that voice can be output from the front (or side) of the target person, or so that the voice of the directional speaker 13 is not heard by another person, the number of directional speakers 13 (audio units 50) can be reduced.
[0085]
Although FIG. 16 illustrates guiding units 10 arranged along one axial direction (the X-axis direction), the same control can be performed when guiding units 10 are also arranged along the Y-axis direction.
[0086]
Next, the processing and operation of the guidance system 100 of the present embodiment will be described in detail based on FIG. 17.
FIG. 17 is a flowchart showing the guidance processing for the target person performed by the control unit 25. In the present embodiment, the guidance processing when a visitor (target person) arrives at the office will be described as an example.
[0087]
In the processing of FIG. 17, first, in step S10, the control unit 25 performs reception processing. Specifically, when the visitor arrives at the reception (see FIG. 11), the control unit 25 images the head of the visitor with the imaging device 11 of the guiding unit 10 provided on the ceiling near the reception, and generates the reference template and the composite templates. Further, the control unit 25 recognizes, from information registered in advance, the areas the visitor is permitted to enter and leave, and announces the meeting place from the directional speaker 13 of the guiding unit 10 near the reception. In this case, for example, the control unit 25 causes the voice synthesis unit 23 to synthesize voice guidance such as "The person in charge of ○○ is waiting in the fifth reception room, so please proceed along the corridor", and outputs it from the directional speaker 13.
[0088]
Next, in step S12, the control unit 25 tracks the visitor by imaging the visitor's head using the imaging devices 11 of the plurality of guiding units 10, as described with reference to FIGS. 12 to 15. In this case, the reference template is updated as needed, and composite templates are also created as needed.
[0089]
Next, in step S14, the control unit 25 determines whether the visitor has been received. If the determination here is affirmative, the entire processing of FIG. 17 ends; if the determination is negative, the processing proceeds to step S16.
[0090]
Next, in step S16, the control unit 25 determines whether guidance for the visitor is required. In this case, the control unit 25 judges that guidance is necessary, for example, when the visitor approaches a branch point on the way to the fifth reception room (such as a position where the visitor needs to turn right). The control unit 25 also judges that guidance is necessary, for example, when the visitor speaks into the directional microphone 12 of the guiding unit 10, such as asking "Where is the toilet?", and likewise when the visitor stands still for a predetermined time (for example, about 3 to 10 seconds).
[0091]
Next, in step S18, the control unit 25 determines whether guidance is required. If the determination in step S18 is negative, the processing returns to step S14; if the determination in step S18 is affirmative, the processing proceeds to step S20.
[0092]
When the processing proceeds to step S20, the control unit 25 checks the traveling direction of the visitor based on the imaging result of the imaging device 11, and estimates the position of the ears (the position of the front of the face). The position of the ears can be inferred from the height associated with the person (subject) identified at the reception. If no height is associated with the subject, the position of the ears may be inferred based on the size of the head imaged at the reception and the height of the image of the subject captured from the front at the reception.
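A one-line sketch of the ear-height estimate (the 150 mm to 200 mm offset is from [0044]; using the mid-value is our own simplification):

    def ear_height_m(subject_height_m: float, offset_mm: float = 175.0) -> float:
        """Ears sit roughly 150-200 mm below the top of the head."""
        return subject_height_m - offset_mm / 1000.0

    print(ear_height_m(1.60))   # about 1.43 m above the floor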
[0093]
Next, in step S22, the control unit 25 selects the directional speaker 13 that is to output voice, based on the position of the visitor. In this case, as described with reference to FIG. 16, the control unit 25 selects a directional speaker 13 positioned in front of or beside the subject's ears and in a direction in which no other person near the subject can hear the voice guidance.
[0094]
Next, in step S24, the control unit 25 adjusts the positions of the directional microphone 12 and the directional speaker 13 by means of the drive device 14, and sets the volume (output) of the directional speaker 13. In this case, the control unit 25 detects the distance between the visitor and the directional speaker 13 of the guiding unit 10b based on the imaging result of the imaging device 11 of the guiding unit 10a, and sets the volume of the directional speaker 13 based on the detected distance. In addition, when the control unit 25 determines, based on the imaging result of the imaging device 11, that the visitor is going straight ahead, it adjusts the positions of the directional microphone 12 and the directional speaker 13 in the tilt direction using the motor 14a (see FIG. 3). Furthermore, when the control unit 25 determines, based on the imaging result of the imaging device 11, that the visitor has turned a corner of the corridor, it adjusts the positions of the directional microphone 12 and the directional speaker 13 in the pan direction using the motor 14b (see FIG. 3).
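Step S24 can be sketched as follows; the patent does not give a control law, so the constant-level-at-the-ear rule and the reference values below are assumptions of this sketch (the motor assignment follows the paragraph above):

    import math

    REF_DB, REF_DIST_M = 60.0, 1.0   # assumed level at the reference distance

    def speaker_output_db(distance_m: float) -> float:
        """Raise output with distance (+20 dB per tenfold, free-field loss)."""
        return REF_DB + 20.0 * math.log10(max(distance_m, REF_DIST_M) / REF_DIST_M)

    def motor_to_drive(turned_corner: bool) -> str:
        """Straight travel -> tilt motor 14a; turning a corner -> pan motor 14b."""
        return "14b (pan)" if turned_corner else "14a (tilt)"

    print(speaker_output_db(4.0))   # about 72 dB at 4 m
    print(motor_to_drive(False))    # 14a (tilt)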
[0095]
Next, in step S26, the control unit 25 provides guidance or a warning to the visitor in the state adjusted in step S24. Specifically, for example, when the visitor reaches a branch point at which he or she should turn right, voice guidance such as "Please turn right" is given. Further, for example, when the visitor asks "Where is the toilet?", the control unit 25 causes the voice recognition unit 22 to recognize the voice input from the directional microphone 12, and causes the voice synthesis unit 23 to synthesize voice that guides the visitor to the nearest toilet within the areas the visitor is permitted to enter and leave. The control unit 25 then outputs the voice synthesized by the voice synthesis unit 23 from the directional speaker 13. Further, for example, when the visitor enters (or is about to enter) an area that the visitor is not permitted to enter (a security area), the control unit 25 gives voice guidance (a warning) such as "Please refrain from entering this area". In the present embodiment, since the directional speaker 13 is employed, voice guidance can be given appropriately only to the person who needs it.
[0096]
As described above, after the processing of step S26 is completed, the processing returns to step S14, and the subsequent processing is repeated until the visitor has been received. As a result, even when a visitor comes to the office, the labor of having a person guide the visitor can be saved, and the visitor can be prevented from entering the security area and the like. In addition, since the visitor does not need to carry a sensor, the visitor is not inconvenienced.
[0097]
As described above in detail, according to the present embodiment, the control unit 25 acquires an imaging result from at least one imaging device 11 capable of capturing an image including a target person, and controls, according to the acquired imaging result, a directional speaker 13 provided outside the imaging range of that imaging device 11. As a result, in a situation where voice output from a directional speaker 13 provided within the imaging range of the imaging device 11 would reach the subject from behind the ears and be difficult to hear, outputting the voice from a directional speaker 13 provided outside the imaging range allows the target person to hear the voice easily. In addition, when another person is near the target person and might overhear the voice, outputting the voice from a directional speaker 13 provided outside the imaging range can prevent the other person from hearing it. That is, the directional speaker 13 can be controlled appropriately.
[0098]
Further, according to the present embodiment, the control unit 25 detects movement information (such as the position) of the target person based on the imaging result of the at least one imaging device 11 and controls the directional speaker 13 based on the detection result, so the directional speaker 13 can be controlled appropriately according to the movement information (such as the position) of the target person.
[0099]
Further, according to the present embodiment, when the control unit 25 determines, based on the movement information of the target person, that the target person is about to move out of the predetermined (permitted) area, or that the target person has already moved out of it, a warning is issued to the target person from the directional speaker 13. This makes it possible to keep the target person out of the security area without human intervention.
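One way to realize both the "about to move out" and the "has moved out" cases is a geofence test with a short linear extrapolation of the tracked position. In the sketch below, the rectangular permitted area and the one-step prediction are illustrative assumptions.

    def inside(rect, p):
        x0, y0, x1, y1 = rect
        return x0 <= p[0] <= x1 and y0 <= p[1] <= y1

    def check_area(rect, prev_pos, cur_pos, warn, horizon=1.0):
        if not inside(rect, cur_pos):
            warn("You have left the permitted area. Please go back.")
            return
        # Extrapolate the next position so the warning fires *before* the
        # target person crosses the boundary of the permitted area
        pred = (cur_pos[0] + horizon * (cur_pos[0] - prev_pos[0]),
                cur_pos[1] + horizon * (cur_pos[1] - prev_pos[1]))
        if not inside(rect, pred):
            warn("Please refrain from entering the area ahead.")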
[0100]
Further, according to the present embodiment, the control unit 25 controls the directional speaker 13 when the imaging device 11 captures an image of a person different from the target person, so the directional speaker 13 can be controlled appropriately so that its sound is not heard by that other person.
[0101]
Further, according to the present embodiment, since the drive device 14 adjusts the position and/or posture of the directional speaker 13, the voice output direction of the directional speaker 13 can be adjusted to an appropriate direction (one in which the target person can easily hear the voice).
[0102]
Further, according to the present embodiment, the drive device 14 adjusts the position and/or posture of the directional speaker 13 according to the movement of the target person, so the voice output direction of the directional speaker 13 can be kept appropriate even while the target person moves.
[0103]
Further, according to the present embodiment, since adjacent imaging devices 11 are arranged so that their imaging regions partially overlap, the target person can be tracked using the adjacent imaging devices 11 even when moving across their imaging regions.
[0104]
Further, according to the present embodiment, the control unit 25 uses an image of the head captured by the imaging device 11 as a reference template; when tracking the target person, it identifies the target person's head using the reference template and then updates the reference template with a new image of the identified head. Therefore, even when the appearance of the head changes, the control unit 25 can appropriately track the moving target person by updating the reference template.
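A minimal sketch of this track-and-update loop follows, using OpenCV template matching as a stand-in for the embodiment's matching step; the acceptance score of 0.6 and the update-on-every-frame policy are assumptions, not the patent's parameters.

    import cv2

    def track_head(frames, initial_template, threshold=0.6):
        template = initial_template
        for frame in frames:
            res = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
            _, score, _, top_left = cv2.minMaxLoc(res)
            if score < threshold:
                yield None  # head not found in this frame
                continue
            h, w = template.shape[:2]
            x, y = top_left
            # Replace the reference template with the newly identified head
            # image, so tracking follows gradual changes in its appearance
            template = frame[y:y + h, x:x + w].copy()
            yield (x, y, w, h)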
[0105]
Further, according to the present embodiment, when a plurality of imaging devices can image the target person simultaneously, the control unit 25 acquires position information of the target person's head as imaged by one imaging device and, from the images captured by another imaging device, uses the image of the region where the head is present as the reference template of that other imaging device. Therefore, even if the head images acquired by the two imaging devices differ (for example, an image β of the back of the head versus an image γ of the forehead), determining the reference template in this way makes it possible to appropriately track the target person using a plurality of imaging devices.
[0106]
Further, according to the present embodiment, the control unit 25 determines that an abnormality has occurred to the target person when the size information of the head changes by a predetermined amount or more, so abnormalities of the target person can be discovered while protecting privacy.
[0107]
Further, according to the present embodiment, the control unit 25 acquires the imaging result of the imaging device 11 capable of capturing an image including the target person, and adjusts the position and/or posture of the directional speaker 13 based on the detected size information of the target person (ear position, height, distance from the imaging device 11, and the like), so the position and posture of the directional speaker 13 can be adjusted appropriately. As a result, the sound output from the directional speaker 13 can be made easy for the target person to hear.
[0108]
Further, according to the present embodiment, the control unit 25 sets the output (volume) of the directional speaker 13 based on the distance between the target person and the imaging device 11, so the sound output from the directional speaker 13 can be made easy for the target person to hear.
[0109]
Further, according to the present embodiment, the control unit 25 provides voice guidance through the directional speaker 13 according to the position of the target person, so appropriate voice guidance (or a warning) can be given, for example, when the target person is at a fork or in the vicinity of the security area.
[0110]
Further, according to the present embodiment, the control unit 25 corrects the size information of the target person based on the positional relationship between the target person and the imaging device 11, so detection errors caused by distortion of the optical system of the imaging device 11 can be suppressed.
[0111]
In the above embodiment, the head of the target person is imaged by the imaging device 11; however, the present invention is not limited to this, and the shoulders of the target person may be imaged instead. In this case, the position of the ears may be inferred from the height of the shoulders.
[0112]
In the above embodiment, the case where the directional microphone 12 and the directional speaker 13 are unitized has been described; however, the present invention is not limited to this, and the directional microphone 12 and the directional speaker 13 may be provided separately. Also, a non-directional microphone (for example, a zoom microphone) may be employed instead of the directional microphone 12, and a non-directional speaker may be employed instead of the directional speaker 13.
[0113]
Further, in the above embodiment, the case where the guidance system 100 is deployed in an office and guidance processing is performed when a visitor comes to the office has been described, but the present invention is not limited to this. For example, the guidance system 100 may be deployed in a store such as a supermarket or a department store and used to guide customers to sales floors and the like. Similarly, the guidance system 100 may be deployed in a hospital or the like, in which case it may be used to guide patients. For example, when a plurality of examinations are conducted in a medical checkup or the like, the target person can be guided from one to the next, improving the efficiency of the diagnostic work, the settlement work, and the like. Furthermore, the guidance system 100 can be used for guidance in places where quietness is required, such as museums, movie theaters, and concert halls. In addition, since there is no fear that another person may overhear the voice guidance, the personal information of the target person can be protected. When a staff member is present at a location where the guidance system 100 is deployed, voice guidance may be given to a target person who needs guidance, and the staff member may be notified that such a target person is present.
[0114]
In the above embodiment, the card reader 88 is provided at the reception desk of the office to identify the person who is about to enter the office. However, the present invention is not limited to this, and the person may be identified by a biometric authentication device using fingerprints, voice, or the like, or by a password input device or the like.
[0115]
The embodiments described above are examples of preferred implementations of the present invention. However, the present invention is not limited to these, and various modifications can be made without departing from the scope of the present invention.
[0116]
11 imaging device, 12 directional microphone, 13 directional speaker, 20 main unit, 25 control unit, 100 guidance system