Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JP2013115761
Abstract: To provide an image display device and the like that, when a plurality of image display areas exist on a screen, identifies the operator of a remote control when the remote control is operated and reflects the content of the remote control operation on the display area corresponding to the identified operator. SOLUTION: An image display apparatus includes: a correspondence storage unit that stores the correspondence between each display area and the audio output apparatus to which the audio accompanying the image of that display area is to be output; a device identification unit that identifies the audio output device located closest to the position of the signal source, based on the positions of the audio output devices and the position of the signal source identified by a second position identification unit; and a control unit that extracts from the correspondence storage unit the display area corresponding to the audio output device identified by the device identification unit and performs, on the image or audio corresponding to the extracted display area, control corresponding to the signal received by a signal receiving unit. [Selected figure] Figure 1
IMAGE DISPLAY DEVICE, IMAGE DISPLAY SYSTEM, AND TELEVISION RECEIVER
[0001]
The present invention relates to an image display apparatus that sets a plurality of display areas on a display screen, displays an image in each of the plurality of display areas, and applies control based on control signals from a remote controller (hereinafter referred to as a "remote control") to the appropriate display area, to an image display system including the image display apparatus, and to a television receiver.
09-05-2019
1
[0002]
At present, with the increase in definition of an image display device, there is an increasing
demand for multi-screen display, that is, setting a plurality of areas on one screen and displaying
different images in each area.
[0003]
Conventionally, when a plurality of people watch different images in a multi-screen display and one of them tries to switch inputs or change the volume, only the operation target display area designated on the remote control can be operated directly.
For a person viewing a display area other than the designated operation target display area to operate that area, an additional operation is required: designating the display area he or she is viewing as the operation target display area on the remote control. Furthermore, if a person operates the remote control while the display area he or she is viewing is not the operation target display area, the display area viewed by someone else (the operation target display area) is operated instead.
[0004]
To solve this problem, a configuration in which as many remote controllers are provided as there are display areas is conceivable. However, as the number of remote controls increases, so does the number of remote control signal types the image display device must distinguish. In addition, the correspondence between remote controls and display areas may be hard to grasp, so a different display area may be operated by mistake; the above problem is therefore not necessarily solved.
[0005]
Patent Document 1, on the other hand, describes a system in which a weak current that allows a plurality of headphones to be distinguished from one another is passed from the headphones to the remote control via the human body. The remote control that receives the weak current transmits its own identification information and the identification information of the headphones to the image display device. The image display apparatus that receives these two pieces of identification information generates control information for changing the settings to be used by the combination of headphones and remote control indicated by the identification information, and transmits the control information to the electronic device. Once the setting change is made in the electronic device, the headphones, the remote control, and the image display device remain associated in the electronic device, and operations on the remote control are reflected on the appropriate image display device or headphones.
[0006]
Japanese Patent Application Laid-Open No. 2006-14199
[0007]
In the system described in Patent Document 1, since the image display device, the remote control, and the headphones are associated with one another, a remote control operation never controls an image display device the user did not intend.
However, once the image display device, the remote control, and the headphones are associated, the remote control can operate only that specific image display device and cannot operate the others. Consequently, as many remote controls as image display devices or headphones are required, and the number of remote control signal types likewise equals the number of image display devices or headphones. This makes the system complex and raises its cost. Therefore, even if the technology described in Patent Document 1 is applied to an image display apparatus in which a plurality of display areas are set, the above-described problem cannot be solved.
[0008]
The present invention has been made in view of the above circumstances, and an object of the present invention is to provide an image display apparatus and the like that, when a plurality of image display areas are set on the screen, identifies the operator of the remote control when the remote control is operated and reflects the content of the remote control operation on the display area corresponding to the identified operator.
[0009]
An image display apparatus according to the present invention sets a plurality of display areas on a display screen, displays an image in each of the plurality of display areas, and outputs the audio accompanying each displayed image to a different audio output apparatus. The image display device comprises: a correspondence storage unit that stores the correspondence between each display area and the audio output device to which the audio accompanying the image of that display area is to be output; a signal receiving unit that receives a signal; a position specifying unit that specifies the position of each audio output device; a second position specifying unit that specifies the position of the transmission source of the signal; a device specifying unit that specifies the audio output device closest to the position of the signal source, based on the positions of the audio output devices specified by the position specifying unit and the position of the signal source specified by the second position specifying unit; and a control unit that extracts from the correspondence storage unit the display area corresponding to the audio output device specified by the device specifying unit and performs, on the image or audio corresponding to the extracted display area, control corresponding to the signal received by the signal receiving unit.
[0010]
In the present invention, the audio output device closest to the signal source is identified based on the positions of the audio output devices and the position of the signal source.
The display area corresponding to the identified audio output device is extracted from the correspondence storage unit, and control corresponding to the received signal is performed on the image or audio corresponding to the extracted display area, so that the display area being viewed by the user who performed the operation can be operated reliably.
[0011]
In the image display device according to the present invention, the position specifying unit acquires an image obtained by imaging the audio output devices and specifies the positions of the audio output devices based on the acquired image.
[0012]
In the present invention, the positions of the audio output devices are specified by imaging them, and the user who performed the operation is thereby identified, so that the display area being watched by that user can be operated with certainty.
[0013]
An image display apparatus according to the present invention sets a plurality of display areas on a display screen and displays an image in each of the plurality of display areas, and comprises: a second correspondence storage unit that stores the correspondence between each display area and the face of the person corresponding to that display area; a signal receiving unit that receives a signal; a second position specifying unit that specifies the position of the transmission source of the signal; a third position specifying unit that specifies the position of each person's face; a specifying unit that specifies the face of the person closest to the position of the signal source, based on the position of the signal source specified by the second position specifying unit and the positions of the persons' faces specified by the third position specifying unit; and a control unit that extracts from the second correspondence storage unit the display area corresponding to the face of the person specified by the specifying unit and performs, on the image or audio corresponding to the extracted display area, control corresponding to the signal received by the signal receiving unit.
[0014]
In the present invention, the person closest to the signal source is identified based on the positions of the persons' faces and the position of the signal source.
The display area corresponding to the identified person is extracted from the second correspondence storage unit, and control corresponding to the received signal is performed on the image or audio corresponding to the extracted display area, so that the display area being viewed by the person who performed the operation can be operated reliably.
[0015]
In the image display device according to the present invention, the second position specifying unit acquires an image obtained by imaging the transmission source of the signal and specifies the position of the transmission source based on the acquired image.
[0016]
In the present invention, since the position of the signal source is specified using an image obtained by imaging the signal source, the position can be specified easily.
[0017]
In the image display device according to the present invention, the second correspondence storage unit further stores the correspondence between each display area and the audio output device to which the audio accompanying the image of that display area is to be output, and the apparatus further includes an audio transmission unit that transmits the audio corresponding to each display area to the corresponding audio output device.
[0018]
According to the present invention, it is possible to reliably provide the user with a sound
corresponding to the display area that the user is watching.
[0019]
An image display system according to the present invention comprises the above-described image display device, operation input means for transmitting the signal to the image display device, and at least one audio output device associated with a display area.
[0020]
In the present invention, it is possible to reliably operate the display area that the user who
operates the operation input unit is watching.
[0021]
A television receiver according to the present invention includes the above-described image display device and a tuner unit that receives television broadcasts, and displays images based on the television broadcast received by the tuner unit on the image display device.
[0022]
According to the present invention, it is possible to reliably operate the display area being watched by the user who performed the operation.
[0023]
In the present invention, the audio output device closest to the signal source is identified based on the positions of the audio output devices and the position of the signal source.
The display area corresponding to the identified audio output device is extracted from the correspondence storage unit, and control corresponding to the received signal is performed on the image or audio corresponding to the extracted display area, so that the display area being viewed by the person who performed the operation can be operated reliably.
[0024]
FIG. 1 is a block diagram showing an example of a hardware configuration of a television receiver according to a first embodiment.
FIG. 2 is a waveform diagram showing an example of the light emission patterns of the light emitting portions of the headphones.
FIG. 3 is a flowchart of the operation of the television receiver according to the first embodiment.
FIG. 4 is an explanatory view conceptually showing an example of the record layout of a multi-screen database.
FIG. 5 is an explanatory view showing an example of an image indicating the positions of the headphones and the remote control.
FIG. 6 is an explanatory view conceptually showing an example of control corresponding to an operation signal.
FIG. 7 is a block diagram showing an example of a hardware configuration of a television receiver according to a second embodiment.
FIG. 8 is an explanatory view conceptually showing an example of the record layout of the multi-screen database in the second embodiment.
FIG. 9 is a flowchart of the operation of the television receiver according to the second embodiment.
FIG. 10 is an explanatory view showing an example of an image indicating the positions of a user's face and the remote control.
[0025]
Hereinafter, an image display apparatus according to an embodiment of the present invention
will be described using a television receiver provided with the image display apparatus as an
example.
First Embodiment FIG. 1 is a block diagram showing an example of a hardware configuration of a
television receiver 1 (image display device) according to a first embodiment. The television
receiver 1 according to the present embodiment includes a control unit 2, a display unit 3, a
camera 4, an image processing unit 5, a multi-screen database 6, a remote control light receiving
unit 7, an audio transmission unit 8 and a tuner unit 9. Further, headphones H1 and H2 are
provided as audio output destinations (audio output devices).
[0026]
The control unit 2 controls the operation of each component. The control unit 2 includes a central processing unit (CPU) 2a, to which a read only memory (ROM) 2b and a random access memory (RAM) 2c are connected via a bus. The control unit 2 realizes the processing of the television receiver 1 according to the present embodiment by reading the computer program recorded in the ROM 2b into the RAM 2c and executing it.
[0027]
The ROM 2b is a non-volatile memory, such as a mask ROM or an EEPROM (Electrically Erasable Programmable ROM), that stores the control programs necessary for the operation of the computer.
[0028]
The RAM 2c is a volatile memory, such as a DRAM (Dynamic RAM) or an SRAM (Static RAM), that temporarily stores various data generated during arithmetic processing by the control unit 2.
[0029]
The display unit 3 displays an image of a television broadcast received by the tuner unit 9 or an
input image from an external device.
In the case of the multi-screen mode, the screen (display screen) is divided into a plurality of
display areas, and an image is displayed for each display area.
[0030]
The camera 4 includes an imaging device such as a complementary metal oxide semiconductor (CMOS) sensor or a charge coupled device (CCD) sensor and an optical system such as a lens.
The camera 4 captures the area in front of the television receiver 1, imaging the infrared signal emitted by the light emitting unit L1 of the headphone H1, the infrared signal emitted by the light emitting unit L2 of the headphone H2, and the infrared signal that is the control signal emitted by the remote control R. In the present embodiment, since infrared rays are captured, it is desirable to use a camera with high sensitivity to infrared light. Although the camera 4 is assumed to be built into the television receiver 1, the invention is not limited to this. For example, an interface used for image input, such as Universal Serial Bus (USB), Institute of Electrical and Electronics Engineers (IEEE) 1394, High-Definition Multimedia Interface (HDMI), Digital Visual Interface (DVI), or S-VIDEO, may be provided and a separately prepared camera connected to it.
[0031]
The image processing unit 5 is configured by a DSP (Digital Signal Processor) or the like, and
performs processing of an image captured by the camera 4.
[0032]
The multi-screen database 6 (correspondence storage unit) stores the correspondence between each display area of the screen and the headphones H1 and H2 as audio output destinations.
[0033]
The remote control light receiving unit 7 (signal receiving unit) receives a remote control signal
(signal) from the remote control R (transmission source) and outputs the signal to the control
unit 2.
The audio transmission unit 8 transmits the audio corresponding to each display area to an
appropriate headphone (audio output device) based on the correspondence relationship stored in
the multi-screen database 6.
The tuner unit 9 receives a television broadcast signal and extracts image information and audio information from it. The extracted image information is displayed in the appropriate display area of the display unit 3, and the extracted audio information is output to the speaker or transmitted by the audio transmission unit 8 to the appropriate headphone.
[0034]
The headphones H1 and H2 include light emitting units L1 and L2, respectively. The light
emitting units L1 and L2 include infrared light emitting diodes (LEDs) as light emitting elements,
and can be distinguished from each other by the blinking interval. The light emitting element is
not limited to the infrared LED. It may be a visible light LED or a small light bulb. When a light
emitting element other than an infrared LED is used, the characteristics of the camera 4 are also
reviewed accordingly.
[0035]
FIG. 2 is a waveform diagram showing an example of a light emission pattern of the light
emitting portions L1 and L2 of the headphones H1 and H2. FIG. 2A shows a light emission
pattern of the light emitting portion L1 of the headphone H1, and FIG. 2B shows a light emission
pattern of the light emitting portion L2 of the headphone H2. 2A and 2B, the vertical axis
represents signal intensity (emission intensity) i, and the horizontal axis represents time t. As
shown in FIG. 2, the light emission cycle t1 of the light emitting unit L1 differs from the light emission cycle t2 of the light emitting unit L2. In FIG. 2 the two light emitting units happen to be synchronized on the time axis so that the light emitting unit L1 also lights when the light emitting unit L2 is lit, but such synchronization is not required.
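Because the two light emitting units have different blink periods, they can be told apart from a sampled on/off sequence. The following is an illustrative sketch (not from the patent text) of that idea; the function names, the frame rate, and the tolerance are assumptions.

```python
# Sketch: distinguish two emitters by their blink period, given a per-frame
# on/off sequence sampled at a known frame interval. All names are assumed.

def blink_period(samples, frame_interval):
    """Estimate the blink period from a binary on/off sample sequence.

    samples: list of 0/1 values, one per frame (1 = emitter lit).
    frame_interval: time between frames in seconds.
    Returns the mean interval between consecutive rising edges.
    """
    rising = [i for i in range(1, len(samples))
              if samples[i] == 1 and samples[i - 1] == 0]
    if len(rising) < 2:
        raise ValueError("need at least two rising edges")
    gaps = [b - a for a, b in zip(rising, rising[1:])]
    return sum(gaps) / len(gaps) * frame_interval

def identify_emitter(samples, frame_interval, periods, tol=0.25):
    """Match the observed blink period against known periods, e.g.
    {'L1': t1, 'L2': t2}; returns the label with the closest period."""
    observed = blink_period(samples, frame_interval)
    label = min(periods, key=lambda k: abs(periods[k] - observed))
    if abs(periods[label] - observed) > tol * periods[label]:
        raise ValueError("no known emitter matches")
    return label
```

For example, with a hypothetical 30 fps camera, an emitter lit every fourth frame would match a registered period of 4/30 s rather than 6/30 s.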
[0036]
Since the remote control signal and the signals from the light emitting portions L1 and L2 both use infrared light, measures against interference are necessary at the remote control light receiving portion 7. The remote control signal is PPM (Pulse Position Modulation) modulated using a carrier wave with a frequency of 33 kHz to 40 kHz, and the remote control light receiver 7 passes only this carrier frequency through a frequency filter. Therefore, to prevent interference, the light emitting units L1 and L2 may use carrier frequencies outside the 33 kHz to 40 kHz range.
[0037]
Next, the operation of the television receiver 1 will be described. Here, two display areas d1 and d2 are displayed on the screen of the display unit 3, and one user views each display area: user A views d1 and user B views d2. The two users A and B wear the headphones H1 and H2, respectively. FIG. 3 is a flowchart of the operation of the television receiver 1 according to the first embodiment. FIG. 4 is an explanatory view conceptually showing an example of the record layout of the multi-screen database 6.
[0038]
The CPU 2a of the controller 2 of the television receiver 1 stores the correspondence between
the display areas d1 and d2 and the headphones H1 and H2 in the multi-screen database 6 (S1).
As shown in FIG. 4, the display area d1 and the headphone H1 are associated with each other,
and the display area d2 and the headphone H2 are associated with each other. The correspondence between display areas and headphones may be defined in advance according to the number of display areas, or may be set by the user, for example when the television receiver 1 is powered on, when multi-screen display starts, or when the user begins viewing. When it is defined in advance, the television receiver 1 notifies the user of the correspondence by displaying messages such as "Sending the audio of display area d1 to headphone 1" and "Sending the audio of display area d2 to headphone 2" at power-on or when a multi-screen display operation is performed.
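The record layout of FIG. 4 can be modeled as a simple mapping from headphone to display area. This is a minimal sketch under assumed names, not the patent's actual data structure.

```python
# Minimal sketch of the multi-screen database of FIG. 4: a mapping from
# headphone identifier to display area. Identifiers are illustrative.

multi_screen_db = {
    "H1": "d1",  # headphone H1 -> display area d1
    "H2": "d2",  # headphone H2 -> display area d2
}

def display_area_for(headphone_id):
    """Return the display area associated with a headphone (step S6)."""
    return multi_screen_db[headphone_id]
```

With this mapping, resolving the operated display area after the headphone has been identified is a single lookup.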
[0039]
The CPU 2a (position specifying unit) specifies the positions of the headphones H1 and H2 (S2).
The CPU 2a causes the image processing unit 5 to process the image captured by the camera 4. The image processing unit 5 performs noise removal by filtering or the like and binarization by thresholding. The CPU 2a specifies the positions of the light emitting units L1 and L2 of the headphones H1 and H2 using a plurality of frames of the processed image. Since the light emitting units L1 and L2 have different light emission cycles, the CPU 2a can specify the positions of the light emitting units L1 and L2 as coordinate values on the image by analyzing a time series of frames. When the positions of the light emitting units L1 and L2 have been specified, an image indicating the position of the light emitting unit L1 or L2, for example an image captured while the light emitting unit L1 or L2 is emitting light, is stored in the RAM 2c. Instead of the image, the position of the light emitting unit L1 or L2 may be stored as coordinate values on the image, or both the image and the coordinate values may be stored.
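A concrete way to read step S2 is: binarize each frame with a threshold and take the centroid of the bright pixels as the emitter position, averaged over the frames in which the emitter is lit. The sketch below assumes frames are plain nested lists of grayscale values; all names and the threshold are illustrative, not from the patent.

```python
# Illustrative sketch of S2: threshold each frame, then use the centroid
# of bright pixels as the emitter position. Frames are nested lists of
# grayscale values; names and the default threshold are assumptions.

def binarize(frame, threshold):
    """Binarization by threshold, as performed by the image processor."""
    return [[1 if v >= threshold else 0 for v in row] for row in frame]

def centroid(binary):
    """Centroid (x, y) of the lit pixels, or None if nothing is lit."""
    pts = [(x, y) for y, row in enumerate(binary)
           for x, v in enumerate(row) if v]
    if not pts:
        return None
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def emitter_position(frames, threshold=128):
    """Average centroid over the frames in which the emitter is lit."""
    cs = [c for c in (centroid(binarize(f, threshold)) for f in frames) if c]
    if not cs:
        return None
    n = len(cs)
    return (sum(c[0] for c in cs) / n, sum(c[1] for c in cs) / n)
```

Frames where the emitter is dark contribute nothing, so the blinking itself does not disturb the position estimate.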
[0040]
The CPU 2a determines whether the remote control signal has been received (S3). The CPU 2a
can determine whether or not the remote control signal is received by polling the remote control
light receiving unit 7 or by an interrupt from the remote control light receiving unit 7.
[0041]
If the remote control signal is not received (NO in S3), the CPU 2a returns the process to S2.
[0042]
When the remote control signal is received (YES in S3), the CPU 2a (second position specifying
unit) specifies the position of the remote control R (S4).
The remote control signal, like the signals of the light emitting portions L1 and L2 of the headphones H1 and H2, consists of the blinking of an infrared LED. Therefore, as in the case of
specifying the positions of the headphones H1 and H2, it is possible to specify the position of the
remote control R as a coordinate value on the image using images of a plurality of frames. When
the position of the remote control R can be specified, an image indicating the position of the
remote control R, for example, an image in a state where the light emitting unit of the remote
control R is emitting light is stored in the RAM 2c. The position coordinates of the light emitting
unit of the remote controller R may be stored as an attribute of the image. The position of the
remote controller R may be stored as coordinate values on the image without storing the image.
If there is a time lag between the remote control light receiver 7 receiving the remote control signal and the CPU 2a detecting the reception, the position of the remote control R may no longer be identifiable in the images captured after the CPU 2a detects it. Therefore, images from a predetermined period in the past may be stored in a storage unit such as a RAM (not shown) in the image processing unit 5 or the RAM 2c in the control unit 2; when the position of the remote control R cannot be identified from the latest image, a past image is used.
[0043]
The CPU 2a determines the positional relationship between the remote control R and the headphones H1 and H2. FIG. 5 is an explanatory view showing an example of an image indicating the positions of the headphones H1 and H2 and the remote control R. The image shown in FIG. 5 can be obtained by combining the images indicating the positions of the remote control R and of the headphones H1 and H2 stored in the RAM 2c. In the following description, the coordinate system has its origin at the upper left of the image, with the X axis extending horizontally and the Y axis vertically from the origin.
[0044]
First, the CPU 2a (device specifying unit) determines the positional relationship between the headphone H1 and the remote control R. The CPU 2a determines whether the remote control R and the headphone H1 are positioned substantially on a straight line along the Y axis (S5), that is, whether the remote control R lies in a region of predetermined width in the X-axis direction that contains the light emitting portion L1 of the headphone H1, here the region with X coordinates from X1 to X2 (X1 < X2). When the remote control R is within this region, the CPU 2a determines that the remote control R and the headphone H1 are substantially on a straight line. If the CPU 2a determines that the remote control R and the headphone H1 are positioned substantially on a straight line (YES in S5), the process proceeds to S6.
[0045]
In the example of FIG. 5, the remote control R lies within the region of predetermined width in the X-axis direction that contains the light emitting portion L1 of the headphone H1. Therefore, the CPU 2a determines that the remote control R and the headphone H1 are positioned substantially on a straight line (YES in S5), concludes that the user wearing the headphone H1 operated the remote control R, and advances the process to S6.
[0046]
If it is determined that the remote control R and the headphone H1 are not positioned substantially on a straight line (NO in S5), the CPU 2a determines whether the remote control R and the headphone H1 are close to each other (S8). The distance between the remote control R and the light emitting unit L1 of the headphone H1 is calculated. If the calculated distance is equal to or less than a predetermined distance, the remote control R and the headphone H1 are determined to be close (YES in S8); if it is greater, they are determined not to be close (NO in S8). The distance is the Euclidean distance between the center points of the image regions in which the remote control R and the light emitting unit L1 of the headphone H1 appear, that is, the square root of the sum of the squares of the difference of the X coordinates and the difference of the Y coordinates of the two points.
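The two geometric tests of S5 and S8 can be sketched directly. The band limits and distance limit below are illustrative parameters, not values given in the patent.

```python
# Sketch of the S5/S8 decisions: the remote and a headphone are "in line"
# when the remote's X coordinate falls inside a band [x1, x2] around the
# headphone's light emitter, and "close" when the Euclidean distance
# between their centers is at most a given limit. Values are illustrative.

from math import hypot

def in_line(remote, band):
    """S5: remote (x, y) lies in the X band (x1, x2) of a headphone."""
    x1, x2 = band
    return x1 <= remote[0] <= x2

def is_close(remote, emitter, limit):
    """S8: Euclidean distance between remote and emitter centers <= limit."""
    return hypot(remote[0] - emitter[0], remote[1] - emitter[1]) <= limit
```

A headphone passes either test, and the corresponding display area is then looked up for step S6.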
[0047]
If the CPU 2a determines that the remote control R and the headphone H1 are not near each
other (NO in S8), the process proceeds to S9. The CPU 2a determines whether all the headphones
have been checked (S9). If all the headphones have not been checked (NO in S9), the process
returns to S5 to determine the other headphones.
[0048]
If the CPU 2a determines that the remote control R and the headphone H1 are close to each
other (YES in S8), the CPU 2a determines that the user wearing the headphone H1 operates the
remote control R, and advances the process to S6.
[0049]
The CPU 2a (control unit) refers to the multi-screen database 6 and acquires the display area d1
corresponding to the headphone H1 (S6).
The CPU 2a executes control corresponding to the operation signal obtained from the remote
control signal on the display area d1 (S7).
[0050]
If, after all the headphones have been checked, no headphone substantially in line with the remote control R and no headphone close to the remote control R has been found (YES in S9), the CPU 2a performs error processing (S10). For example, the remote control signal is ignored and the display unit 3 displays a message indicating that the operation was ignored because the operator of the remote control R could not be detected.
[0051]
In the above description, to determine which of the headphones H1 and H2 is closer to the remote control R, it is first checked whether the remote control R and the headphone H1 are positioned on a straight line (S5), and only if they are not is the distance between the remote control R and the headphone H1 calculated; however, the method is not limited to this. The straight-line determination may be omitted: the distance between the remote control R and the headphone H1 may simply be calculated, and the two may be determined to be close when the distance is less than a predetermined value. Alternatively, the straight-line determination may be omitted and the distances from the remote control R to the headphone H1 and to the headphone H2 both calculated, with the headphone at the shortest distance taken as the headphone closest to the remote control R.
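The last alternative described above, picking the headphone at the shortest distance, can be sketched as follows; identifiers and coordinates are illustrative.

```python
# Sketch of the simplified alternative: omit the straight-line test and
# pick the headphone whose light emitter is nearest the remote control.

from math import hypot

def nearest_headphone(remote, emitters):
    """Return the id of the emitter closest to the remote position.

    remote: (x, y) of the remote control; emitters: dict id -> (x, y)."""
    return min(emitters,
               key=lambda k: hypot(remote[0] - emitters[k][0],
                                   remote[1] - emitters[k][1]))
```

This variant always yields some headphone, so the error path of S9/S10 is reached only when no headphone position could be obtained at all.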
09-05-2019
15
[0052]
FIG. 6 is an explanatory view conceptually showing examples of control corresponding to operation signals. FIG. 6A shows the case where channel switching is received as the operation signal and the display channel of the display area d1 is changed from Ch1 to Ch2. FIG. 6B shows the case where input switching is received as the operation signal and the input source of the display area d1 is changed from input 1 to input 2. FIG. 6C shows the case where a volume change is received as the operation signal and the volume of the display area d1 is changed from 10 to 20.
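Step S7 amounts to dispatching the operation signal to the state of the display area resolved for the operator. The state keys and signal names below are illustrative assumptions, not identifiers from the patent.

```python
# Sketch of S7 / FIG. 6: apply an operation signal to the state of the
# display area resolved for the operator. Keys and names are illustrative.

def apply_operation(area_state, signal, value):
    """Mutate a display-area state dict according to the operation signal."""
    if signal == "channel":      # FIG. 6A: channel switching
        area_state["channel"] = value
    elif signal == "input":      # FIG. 6B: input switching
        area_state["input"] = value
    elif signal == "volume":     # FIG. 6C: volume change
        area_state["volume"] = value
    else:
        raise ValueError("unknown operation signal: %s" % signal)
    return area_state

d1 = {"channel": "Ch1", "input": "input 1", "volume": 10}
apply_operation(d1, "channel", "Ch2")  # FIG. 6A
apply_operation(d1, "volume", 20)      # FIG. 6C
```

Only the resolved area's state changes; the other display areas are untouched, which is the point of identifying the operator first.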
[0053]
In the above description, the display areas and headphones are in one-to-one correspondence, but the present invention is not limited to this. For example, suppose there are two display areas d1 and d2 and three users wearing the headphones H1, H2, and H3, respectively. In such a case, the display area d1 may be associated with the headphones H1 and H2, and the display area d2 with the headphone H3. When the user wearing the headphone H1 or H2 operates the remote control R, the operation is reflected in the display area d1; when the user wearing the headphone H3 operates it, the operation is reflected in the display area d2.
[0054]
Also, not all users need to wear headphones; there may be users who listen to the sound through
speakers. In such a case, the multi-screen database 6 stores records not associated with any
headphones. In operation, S10 in FIG. 3 is then not an error process but the following process: a
display area not associated with headphones is acquired from the multi-screen database 6, and
the control corresponding to the operation signal obtained from the remote control signal is
executed for the acquired display area.
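The speaker fallback can be sketched by allowing records with no headphone; in S10 the areas of such records become the control target. The record layout below is an assumption for illustration only:

```python
# Multi-screen database records; headphone=None models a speaker listener.
records = [
    {"area": "d1", "headphone": "H1"},
    {"area": "d2", "headphone": None},
]

def speaker_areas(db):
    """Display areas not associated with any headphone (S10 fallback)."""
    return [r["area"] for r in db if r["headphone"] is None]
```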
[0055]
As described above, the television receiver 1 according to the present embodiment identifies the
headphone Hx (x = 1 or 2) corresponding to the remote control R based on the positions of the
headphones H1 and H2 and the position of the remote control R. A display area dx (x = 1 or 2)
corresponding to the identified headphone Hx is extracted from the multi-screen database 6, and
control corresponding to the operation signal obtained from the remote control signal is
performed on the image or sound corresponding to the extracted display area dx. Therefore, the
screen that the user who operated the remote control R is watching can be reliably operated.
[0056]
Second Embodiment FIG. 7 is a block diagram showing an example of the hardware
configuration of a television receiver 10 according to a second embodiment. The same
components as those of the television receiver according to the first embodiment shown in FIG.
1 are assigned the same reference numerals, and descriptions thereof are omitted. The television
receiver 10 according to the present embodiment identifies the user who operated the remote
control R by face recognition, rather than by the headphones worn by the user.
[0057]
FIG. 8 is an explanatory view conceptually showing an example of the record layout of the
multi-screen database 60 according to the second embodiment. The multi-screen database 60
(second correspondence storage unit) stores correspondences between data related to each
user's face and the display area of the screen viewed by that user. The data relating to the user's
face is, for example, face image data, but it may be data indicating a feature amount of the face,
that is, data sufficient for recognition. To store data in the multi-screen database 60, the user
operates the remote control R at the start of viewing to register his or her face data.
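The record layout of FIG. 8 pairs face data with a display area. A minimal sketch, assuming the face data has been reduced to a feature vector; all names and values are illustrative:

```python
# Multi-screen database 60: face data <-> display area viewed by the user.
db60 = []

def register_face(face_features, area):
    """Register one user's face data and the display area he or she watches."""
    db60.append({"face": face_features, "area": area})

register_face([0.12, 0.80, 0.45], "d1")  # user A
register_face([0.77, 0.10, 0.33], "d2")  # user B
```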
[0058]
Next, the operation of the television receiver 10 will be described. As in the first embodiment,
two image display areas are displayed on the screen, and one user views each display area. The
two users are A and B. FIG. 9 is a flowchart of the operation of the television receiver 10
according to the second embodiment. The CPU 2a of the control unit 2 of the television receiver
10 stores the correspondence between the display areas and the users' faces in the multi-screen
database 60 (S11). An image captured by the camera 4 is displayed on the display unit 3, and
the user uses the remote control R to designate the region in which his or her face appears and
the display area he or she is viewing. The CPU 2a sets the image of the region designated by the
user as the face image, and stores the display area designated by the user in the multi-screen
database 60 as the display area corresponding to that face image. The same is done for all users.
The above operation may be performed as needed, for example when the power of the television
receiver 10 is turned on, when switching to multi-screen display, or when the number of display
areas is increased. Whether to show the multi-screen display immediately after power-on may
follow the state at the time of power-off: if the power was turned off during multi-screen
display, the multi-screen display is shown immediately after power-on, and if the power was
turned off during single-screen display, the single-screen display is shown immediately after
power-on. Alternatively, regardless of the state at power-off, the single-screen display may
always be shown immediately after power-on.
[0059]
Next, the CPU 2a (third position specifying unit) specifies the position of the user (S12). The CPU
2a causes the image processing unit 5 to perform face recognition processing using the image
captured by the camera 4 and the face images or face feature amounts of the users registered in
the multi-screen database 60. For the face recognition processing, known techniques such as
template matching and principal component analysis may be used. The CPU 2a stores the
position of each user's face in the image in association with the display area viewed by that user.
The storage destination is, for example, a RAM (not shown) provided in the image processing
unit 5 or the RAM 2c provided in the control unit 2.
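Matching a detected face against the registered face data can be sketched as a nearest-neighbor search over feature vectors, a crude stand-in for the template-matching or principal-component-analysis step mentioned above; the vectors and names are illustrative assumptions:

```python
import math

def recognize(face_vec, db):
    """Return the display area of the registered record whose face
    feature vector is nearest to the detected one (nearest neighbor)."""
    best = min(db, key=lambda r: math.dist(face_vec, r["face"]))
    return best["area"]

db = [
    {"face": [0.12, 0.80, 0.45], "area": "d1"},  # user A
    {"face": [0.77, 0.10, 0.33], "area": "d2"},  # user B
]
print(recognize([0.10, 0.78, 0.50], db))  # close to user A -> d1
```

A production system would of course use a proper recognizer; the point is only that recognition yields the display area associated with the matched face.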
[0060]
The CPU 2a determines whether the remote control signal has been received (S13). The CPU 2a
can determine whether or not the remote control signal is received by polling the remote control
light receiving unit 7 or by an interrupt from the remote control light receiving unit 7.
[0061]
If the remote control signal is not received (NO in S13), the CPU 2a returns the process to S12.
[0062]
When the remote control signal is received (YES in S13), the CPU 2a specifies the position of the
remote control R (S14).
The CPU 2a causes the image processing unit 5 to identify the light emission position of the
remote control R from the image captured by the camera 4. When the position of the remote
control R can be identified, an image indicating the position of the remote control R, for example
an image in which the light emitting unit of the remote control R is emitting light, is stored in
the RAM 2c. At this time, the position coordinates of the light emitting unit of the remote
control R may be stored as an attribute of the image. If there is a time lag between when the
remote control light receiving unit 7 receives the remote control signal and when the CPU 2a
detects the reception, the position of the remote control R may no longer be identifiable in the
images captured after the detection. Therefore, images from a predetermined time in the past
may be stored in a storage unit such as a RAM (not shown) included in the image processing
unit 5 or the RAM 2c included in the control unit 2. When the position of the remote control R
cannot be identified in the latest image, a past image is used.
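Keeping a short history of frames, so that a past image can be searched when the latest frame no longer shows the remote's light emission, can be sketched with a bounded buffer; the frame count and names are illustrative assumptions:

```python
from collections import deque

# Keep only the most recent frames; older ones are dropped automatically.
recent_frames = deque(maxlen=30)

def find_remote(frames, locate):
    """Scan frames from newest to oldest; `locate` returns the remote's
    position in a frame, or None if the LED is not visible there."""
    for frame in reversed(frames):
        pos = locate(frame)
        if pos is not None:
            return pos
    return None
```

If the newest frame misses the LED because of the reception time lag, the scan simply falls back to an earlier frame in the buffer.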
[0063]
The CPU 2a determines the positional relationship between the position of the user's face and
the remote control R. FIG. 10 is an explanatory view showing an example of an image indicating
the positions of the faces of the users A and B and the remote control R. The remote control R
and the faces of the users A and B are captured in the image. In the following description, the
same coordinate system as in the first embodiment is used.
[0064]
The CPU 2a (specifying unit) determines the positional relationship between the face of the user
A and the remote control R. The CPU 2a determines whether the face of the user A and the
remote control R are positioned substantially on a straight line along the Y axis (S15).
Specifically, it is determined whether the remote control R lies within a band of predetermined
width in the X-axis direction that includes the face of the user A, that is, a region whose X
coordinate is between X3 and X4 (X3 < X4). When the remote control R is included in this
predetermined region, the CPU 2a determines that the face of the user A and the remote control
R are positioned substantially on a straight line. When the CPU 2a determines that they are
substantially on a straight line (YES in S15), the process proceeds to S16.
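The band test of S15 reduces to a single interval check on the X coordinate. A minimal sketch, where the band half-width is an illustrative parameter:

```python
def on_straight_line(face_x, remote_x, half_width):
    """S15 sketch: the face and remote are 'substantially on a straight
    line' along the Y axis when the remote's X coordinate falls inside a
    band of predetermined width centered on the face (X3 <= x <= X4)."""
    x3, x4 = face_x - half_width, face_x + half_width
    return x3 <= remote_x <= x4
```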
[0065]
In the example of FIG. 10, the remote control R is included in an area having a predetermined
width in the X-axis direction including the face of the user A. Therefore, the CPU 2a determines
that the face of the user A and the remote control R are positioned substantially on a straight line
(YES in S15). The CPU 2a determines that the user A has operated the remote control R, and
advances the process to S16.
[0066]
When it is determined that the positional relationship between the face of the user A and the
remote control R is not substantially on a straight line (NO in S15), the CPU 2a determines
whether the face of the user A is close to the remote control R (S18). The distance between the
face of the user A and the remote control R is calculated. If the calculated distance is equal to or
less than a predetermined distance, it is determined that the face of the user A and the remote
control R are close (YES in S18). If the calculated distance is larger than the predetermined
distance, it is determined that they are not close (NO in S18). As in the first embodiment, the
distance is the Euclidean distance between the center points of the remote control R and the
face of the user A.
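The closeness test of S18 can be sketched as a distance comparison; the threshold value is illustrative:

```python
import math

def is_close(face_pos, remote_pos, threshold):
    """S18 sketch: compare the Euclidean distance between the center
    points of the face and the remote control R with a predetermined
    distance (threshold value is illustrative)."""
    return math.dist(face_pos, remote_pos) <= threshold
```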
[0067]
When the CPU 2a determines that the face of the user A and the remote control R are not close
to each other (NO in S18), the process proceeds to S19. The CPU 2a determines whether or not
the faces of all the users have been checked (S19). If the faces of all the users have not been
examined (NO in S19), the process returns to S15 to determine the faces of the other users.
[0068]
If the CPU 2a determines that the face of the user A is close to the remote control R (YES in S18),
the CPU 2a determines that the user A has operated the remote control R, and advances the
process to S16.
[0069]
The CPU 2a acquires the display area d1 corresponding to the face of the user A from the data,
stored in a storage unit such as the RAM 2c, in which the position of each user's face in the
image is associated with the display area viewed by that user (S16). The CPU 2a executes control
corresponding to the operation signal obtained from the remote control signal on the display
area d1 (S17).
[0070]
Even if all the users have been checked, if no user's face is found substantially on a straight line
with the remote control R and no user's face is found close to the remote control R (YES in S19),
the CPU 2a performs error processing (S20). For example, the remote control signal is ignored,
and the display unit 3 displays a message indicating that the operation was ignored because the
operator of the remote control R could not be detected.
[0071]
In the above description, when identifying the face of the user closest to the remote control R, it
is first determined whether a face and the remote control R are positioned on a straight line, and
then whether they are close; however, the present invention is not limited to this. The process of
determining whether the face and the remote control R are located on a straight line may be
omitted: the distance between the face and the remote control R is calculated, and if the distance
is equal to or less than a predetermined value, they may be determined to be close. Alternatively,
the straight-line determination may be omitted, the distance between each user's face and the
remote control R may be calculated, and the face with the shortest distance may be identified as
the face of the user closest to the remote control R.
[0072]
In the above description, face recognition processing is performed in S12 to specify at which
position in the image each user's face is located. However, face recognition may be skipped at
this stage, and only the positions at which human faces appear in the image may be identified. In
S16, once the face of the user positioned on a straight line with the remote control R, or close to
it, has been determined, face recognition may then be performed to acquire the corresponding
display area. In such a case, since face recognition processing is unnecessary in S12, processing
becomes more efficient.
[0073]
Further, in the above description, the display areas and the users' faces have a one-to-one
relationship, but the present invention is not limited to this. For example, the display area d1
may be associated with (the faces of) the users A and B, and the display area d2 with (the face
of) the user C. In such a case, when the user A or the user B operates the remote control R, the
operation is reflected in the display area d1. When the user C operates the remote control R, the
operation is reflected in the display area d2.
[0074]
Furthermore, in the above description, the multi-screen database 60 stores the correspondence
between data on the user's face and the display area of the screen viewed by the user, but the
present invention is not limited to this. As in the first embodiment, a table (table T1) may be
provided that stores, for each display area of the screen, the headphone H1 or H2 to which the
corresponding audio is output. A table (table T2) may also be provided that stores the
correspondence between the headphones H1 and H2 and the data on the users' faces. In this
case, the CPU 2a identifies the face of the user corresponding to the remote control R, identifies
the headphone corresponding to the identified face using table T2, and identifies the display
area corresponding to the identified headphone using table T1. Alternatively, a single table
associating display areas, headphones, and users' faces may be provided as the multi-screen
database. By storing the correspondence between each display area and the headphones, the
audio transmission unit 8 can transmit the audio associated with each display area to the
appropriate headphone.
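The two-table variant chains two lookups: table T2 maps a recognized face to a headphone, and table T1 maps a headphone to its display area. A minimal sketch, with all IDs illustrative:

```python
table_t1 = {"H1": "d1", "H2": "d2"}          # headphone -> display area
table_t2 = {"faceA": "H1", "faceB": "H2"}    # face -> headphone

def area_for_face(face_id):
    """Chain T2 then T1 to reach the display area for a recognized face."""
    return table_t1[table_t2[face_id]]
```

Collapsing the two tables into the single-table variant mentioned above merely precomputes this composition.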
[0075]
As described above, the television receiver 10 according to the present embodiment identifies
the user corresponding to the remote control R based on the positions of the faces of the users A
and B and the position of the remote control R, using face recognition technology. A display area
corresponding to the identified user's face is extracted from the multi-screen database 60, and
control corresponding to the operation signal received from the remote control R is performed
on the image or sound corresponding to the extracted display area. Therefore, the screen that
the user who operated the remote control R is watching can be reliably operated.
[0076]
In the first embodiment described above, user identification is performed by the light emitting
units L1 and L2 attached to the headphones H1 and H2, and in the second embodiment by face
recognition, but the present invention is not limited to this. For example, a unique mark may be
applied to each headphone and the headphones identified by the difference in the marks, or the
headphones may be made different in color and shape and identified accordingly.
[0077]
The user may wear not only headphones alone but also a combination of glasses and
headphones for viewing a 3D image. In this case, the light emitting unit or the mark may be
attached to the glasses, or identification may be performed by the color or shape of the glasses.
[0078]
The embodiments described above are to be considered in all respects as illustrative and not
restrictive. The scope of the present invention is indicated not by the above description but by
the claims, and is intended to include all modifications within the meaning and scope equivalent
to the claims.
[0079]
1, 10: television receiver; 2: control unit; 2a: CPU; 2b: ROM; 2c: RAM; 3: display unit; 4: camera;
5: image processing unit; 6, 60: multi-screen database; 7: remote control light receiving unit; 8:
audio transmission unit; 9: tuner unit