Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JP2007110582
To output audio corresponding to a displayed image so as to enhance the sense of reality. An audio output position control unit analyzes the image of the image data supplied from an image processing unit and, if the scene shows a person speaking, controls the output destination of the audio data supplied from an audio processing unit: the audio data is supplied to the audio output unit 39 so that the speech is output from the loudspeaker closest to the on-screen position of the speaking person. The audio output unit 39 has a plurality of speakers, and outputs the audio data supplied from the audio output position control unit 37 from the speaker designated by the audio output position control unit 37. The present invention can be applied to an image display device. [Selected figure] Figure 3
Image display device and method, and program
[0001]
The present invention relates to an image display apparatus and method, and a program, and
more particularly, to an image display apparatus and method, and a program capable of
outputting a sound corresponding to an image to be displayed so as to enhance the sense of
reality.
[0002]
A conventional television receiver receives a television signal broadcast from a broadcast station, displays the image of a television broadcast program, and outputs its audio.
[0003]
09-05-2019
1
For example, when the audio included in the television signal received by the television receiver
1 shown in FIG. 1 is monaural, the same audio is output from both the speaker 2A and the
speaker 2B of the television receiver 1.
Therefore, for example, even when any one of the persons 3A to 3C speaks on the screen, the speakers from which the sound is output (speaker 2A and speaker 2B) do not change.
Furthermore, in the monaural system the same sound is output from the left and the right, so the viewing user hears the voices of persons 3A to 3C all from the same direction, which may reduce the sense of presence.
[0004]
On the other hand, when the audio included in the television signal received by the television receiver 1 shown in FIG. 1 is in the stereo system, the directions in which the user perceives the voices of the persons 3A to 3C to be generated (the directions of the sound sources as seen from the user) can be made to differ from one another by means of the difference in volume between the left and right audio.
[0005]
However, even in this stereo system, the speakers from which the sound is output are still only the speakers 2A and 2B and do not change, so the direction of sound generation cannot be shifted very far, and the sense of presence may still be reduced.
[0006]
Also, in general, on the broadcast station side, it is impossible to estimate what kind of television
receiver each viewer uses to view a program.
However, the viewing environment of the user, such as the positional relationship between the
left and right speakers, the characteristics of the speakers, the position of the user, and the
installation place of the television receiver, differs depending on the user.
Therefore, depending on the user's viewing environment, the conditions may differ greatly from the environment assumed when the stereo audio signal included in the television signal was generated, and there was a risk that the user could not obtain the sense of presence intended by the broadcast station.
[0007]
On the other hand, a method is conceivable in which sound from a specific sound source is acquired in advance to generate an acoustic signal, an information signal is generated and recorded from the acoustic signal, the video signal, and a position signal detection unit, and sound output corresponding to the display position is then performed (see, for example, Patent Document 1).
[0008]
Japanese Patent Application Publication No. 2003-264900
[0009]
However, in this case, it is necessary to acquire the sound from the specific sound source and generate the information signal in advance, and it has therefore been difficult to apply this method to the reception and display of ordinary conventional television broadcasts and the like.
[0010]
The present invention has been made in view of such a situation, and makes it possible to output sound corresponding to an image to be displayed so as to enhance the sense of reality.
[0011]
An image display apparatus according to one aspect of the present invention is an image display apparatus for displaying an image, and includes image display means for displaying an image; audio output means for outputting, from a plurality of positions, audio corresponding to the image displayed by the image display means; and audio output position control means for analyzing the image and controlling the audio output means according to the content of the image to select the position from which the audio is output.
[0012]
The audio output means may include a plurality of speakers, and the audio output position control means may control the audio output means to select the speaker from which the audio is output.
[0013]
The audio output means may virtually control the output position of the audio by controlling the output timing of the audio, and the audio output position control means may control the audio output means to select the output timing, thereby controlling the virtual position from which the audio is output.
[0014]
The audio output position control means may include screen area division means for dividing the display screen of the image display means into a plurality of predetermined areas; difference average value calculation means for calculating, for each of the areas divided by the screen area division means, a difference average value that is the average of the inter-frame differences of the luminance values of the pixels having a predetermined feature whose pixel values are determined in advance; and audio output control means for controlling the audio output means based on the difference average value calculated by the difference average value calculation means, to select the position from which the audio is output.
[0015]
The audio may be single-channel audio.
[0016]
The audio may be multi-channel audio.
[0017]
The audio output position control means may include difference average value calculation means for calculating, for each of a plurality of areas allocated to the image, a difference average value that is the average of the inter-frame differences of the luminance values of the pixels having a predetermined feature whose pixel values are determined in advance; volume confirmation means for confirming the volume of each of the plurality of channels and identifying the output position of the audio; difference average value correction means for correcting the difference average value calculated by the difference average value calculation means, based on the audio output position identified by the volume confirmation means; area specifying means for specifying the area from which the audio is to be output, based on the difference average value corrected by the difference average value correction means; and audio output control means for controlling the audio output means so that the audio is output from the position corresponding to the area specified by the area specifying means.
[0018]
The audio output position control means may include difference average value calculation means for calculating, for each of a plurality of areas allocated to the image, a difference average value that is the average of the inter-frame differences of the luminance values of the pixels having a predetermined feature whose pixel values are determined in advance; area specifying means for specifying the area from which the audio is to be output, based on the difference average value calculated by the difference average value calculation means; volume adjustment means for determining the adjustment amount of the volume of the audio output by the audio output means, based on the area specified by the area specifying means; volume confirmation means for confirming the volume of each of the plurality of channels and identifying the output position of the audio; comparison means for comparing the area specified by the area specifying means with the audio output position identified by the volume confirmation means; adjustment amount correction means for correcting the volume adjustment amount determined by the volume adjustment means, based on the comparison result of the comparison means; and audio output control means for controlling the audio output means so as to adjust the volume of the audio based on the adjustment amount corrected by the adjustment amount correction means.
[0019]
The apparatus may further include user position handling processing means for detecting the position of the user viewing the image and listening to the audio, and controlling the delay amount of the audio output timing of the audio output means according to the position of the user.
[0020]
The audio output means may have a plurality of detachable speakers, and the apparatus may further include area setting means for setting the areas divided by the screen area division means according to the positions of the connected speakers.
[0021]
An image display method according to one aspect of the present invention is an image display method of an image display apparatus for displaying an image, including the steps of: displaying an image; analyzing the displayed image and selecting, according to the content of the image, a position from which the audio corresponding to the image is output; and outputting the audio from the selected position.
[0022]
A program according to one aspect of the present invention causes a computer to perform processing including: displaying an image; analyzing the displayed image and selecting, according to the content of the image, a position from which the audio corresponding to the image is output; and outputting the audio from the selected position.
[0023]
In one aspect of the present invention, an image is displayed, the displayed image is analyzed, a position from which sound corresponding to the image is output is selected according to the content of the image, and the sound is output from the selected position.
[0024]
According to one aspect of the present invention, an image can be displayed and audio can be output. In particular, the sound corresponding to the displayed image can be output so as to enhance the sense of reality.
[0025]
Embodiments of the present invention will be described below. The correspondence between the invention described in this specification and the embodiments of the invention is as follows.
This description is to confirm that embodiments supporting the invention described in the claims are described in this specification.
Therefore, even if there is an embodiment that is described in the embodiments of the invention but is not described here as corresponding to the invention, this does not mean that the embodiment does not correspond to the invention.
Conversely, even if an embodiment is described here as corresponding to an invention, this does not mean that the embodiment does not correspond to inventions other than that invention.
[0026]
Moreover, this description does not cover all of the inventions described in this specification. In other words, this description does not deny the existence of inventions that are described in this specification but are not claimed in the present application, that is, the existence of inventions that may be filed in divisional applications or added by amendment in the future.
[0027]
The image display device according to one aspect of the present invention (for example, the display device 10 in FIG. 2) includes image display means (for example, the display unit 38 in FIG. 3) for displaying an image; audio output means (for example, the audio output unit 39 in FIG. 3) for outputting, from a plurality of positions, the audio corresponding to the image displayed by the image display means; and audio output position control means (for example, the audio output position control unit 37 in FIG. 3) for analyzing the image and controlling the audio output means according to the content of the image to select the position from which the audio is output.
[0028]
The audio output means may have a plurality of speakers (for example, the speakers 11 to 13 in FIG. 2), and the audio output position control means may control the audio output means to select the speaker from which the audio is output.
[0029]
The audio output means may virtually control the output position of the audio by controlling the output timing of the audio, and the audio output position control means may control the audio output means to select the output timing, thereby controlling the virtual position from which the audio is output (for example, step S133 in FIG. 21).
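The timing-based control just described can be illustrated with a minimal sketch. Delaying one speaker's output relative to the other shifts the perceived source toward the earlier speaker (the precedence effect); the function name, sample rate, and maximum delay below are illustrative assumptions, not values taken from this patent.

```python
def virtual_position_delays(position, max_delay_samples=40):
    """Return (left_delay, right_delay) in samples for a virtual source.

    position: 0.0 = far left, 0.5 = center, 1.0 = far right.
    The speaker nearer the virtual position plays first; the other
    speaker is delayed, pulling the perceived source toward the
    earlier one. max_delay_samples is an assumed placeholder.
    """
    if not 0.0 <= position <= 1.0:
        raise ValueError("position must be in [0, 1]")
    offset = position - 0.5            # negative = left of center
    delay = round(abs(offset) * 2 * max_delay_samples)
    if offset < 0:                     # source on the left: delay the right speaker
        return 0, delay
    return delay, 0                    # source on the right (or center): delay the left
```

For a centered source both delays are zero; moving the virtual position left increases only the right speaker's delay.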
[0030]
The audio output position control means may include screen area division means (for example, the screen area division unit 51 in FIG. 4) for dividing the display screen of the image display means into a plurality of predetermined areas; difference average value calculation means (for example, the difference average value calculation unit 52 in FIG. 4) for calculating, for each of the areas divided by the screen area division means, the average value of the inter-frame differences of the luminance values of the pixels having a predetermined feature whose pixel values are determined in advance; and audio output control means (for example, the audio output control unit 54 in FIG. 4) for controlling the audio output means based on the difference average value calculated by the difference average value calculation means, to select the position from which the audio is output.
[0031]
The audio may be single-channel audio (for example, the audio data in FIG. 4).
[0032]
The audio may be multi-channel audio (for example, the audio data in FIG. 15).
[0033]
The audio output position control means may include difference average value calculation means (for example, the difference average value calculation unit in FIG. 4) for calculating, for each of a plurality of areas allocated to the image, a difference average value that is the average of the inter-frame differences of the luminance values of the pixels having a predetermined feature whose pixel values are determined in advance; volume confirmation means (for example, the volume confirmation unit in FIG. 15) for confirming the volume of each of the plurality of channels and identifying the output position of the audio; difference average value correction means (for example, the difference average value correction unit in FIG. 15) for correcting the difference average value calculated by the difference average value calculation means, based on the audio output position identified by the volume confirmation means; area specifying means (for example, the area specifying unit in FIG. 15) for specifying the area from which the audio is output, based on the corrected difference average value; and audio output control means (for example, the output control unit in FIG. 15) for controlling the audio output means so that the audio is output from the position corresponding to the specified area.
[0034]
The audio output position control means may include difference average value calculation means (for example, the difference average value calculation unit in FIG. 4) for calculating, for each of a plurality of areas allocated to the image, a difference average value that is the average of the inter-frame differences of the luminance values of the pixels having a predetermined feature whose pixel values are determined in advance; area specifying means (for example, the area specifying unit in FIG. 17) for specifying the area from which the audio is output, based on the difference average value calculated by the difference average value calculation means; volume adjustment means (for example, the volume adjustment unit in FIG. 17) for determining the adjustment amount of the volume of the audio output by the audio output means, based on the specified area; volume confirmation means (for example, the volume confirmation unit in FIG. 17) for confirming the volume of each of the plurality of channels and identifying the output position of the audio; comparison means (for example, the comparison unit in FIG. 17) for comparing the specified area with the identified audio output position; adjustment amount correction means (for example, the adjustment amount correction unit in FIG. 17) for correcting the volume adjustment amount determined by the volume adjustment means, based on the comparison result; and audio output control means (for example, the output control unit in FIG. 17) for controlling the audio output means so as to adjust the volume of the audio based on the corrected adjustment amount.
[0035]
User position handling processing means (for example, the user position handling processing unit in FIG. 19) may further be provided, which detects the position of the user viewing the image and listening to the audio, and controls the delay amount of the audio output timing of the audio output means according to the position of the user.
[0036]
The audio output means may have a plurality of detachable speakers (for example, the speakers in FIG. 22), and area setting means (for example, the area setting unit in FIG. 24) may further be provided, which sets the areas divided by the screen area division means according to the positions of the connected speakers.
[0037]
In the image display method or program according to one aspect of the present invention, an image is displayed (for example, step S5 in FIG. 7), the displayed image is analyzed and a position from which the audio corresponding to the image is output is selected according to the content of the image (for example, step S4 in FIG. 7), and the audio is output from the selected position (for example, step S6 in FIG. 7).
[0038]
Hereinafter, embodiments of the present invention will be described with reference to the
drawings.
[0039]
FIG. 2 is a diagram showing a configuration example of the appearance of a display device to
which the present invention is applied.
In FIG. 2, the display device 10 has three speakers (speakers 11 to 13) below the area for
displaying an image.
The display device 10 acquires image data and audio data by receiving a television signal or by acquiring content data through an external input, displays an image corresponding to the image data on a monitor as a display image 20, and outputs the sound corresponding to the display image 20 from the speakers 11 to 13.
[0040]
For example, when three persons 21 to 23 are talking in the display image 20, the display device 10 analyzes the display image 20 and outputs the voice of the person 21 at the left end from the speaker 13 disposed at the left end, the voice of the person 22 at the center from the speaker 12 disposed at the center, and the voice of the person 23 at the right end from the speaker 11 disposed at the right end.
As described above, by controlling the audio output position according to the content of the
display image 20, the display device 10 can output the audio corresponding to the display image
20 so as to enhance the sense of reality.
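The mapping from an on-screen position to the nearest speaker can be sketched as follows. This is an illustration only: the function name and the use of a simple horizontal index are assumptions, and the left-to-right speaker ordering 13, 12, 11 is inferred from the example above, in which the speaker 11 is at the right end and the speaker 12 at the center.

```python
# Assumed left-to-right layout of the speakers in FIG. 2.
SPEAKERS_LEFT_TO_RIGHT = [13, 12, 11]

def speaker_for_x(x, screen_width, speakers=SPEAKERS_LEFT_TO_RIGHT):
    """Pick the speaker nearest to a horizontal position on screen.

    The screen is treated as len(speakers) equal strips; the strip
    containing x selects the speaker.
    """
    if not 0 <= x < screen_width:
        raise ValueError("x outside the screen")
    strip = x * len(speakers) // screen_width   # 0, 1, or 2 for three speakers
    return speakers[strip]
```

With a 1920-pixel-wide screen, a position near the left edge maps to the speaker 13 and one near the right edge to the speaker 11.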
[0041]
FIG. 3 is a block diagram showing an example of the internal configuration of the display device
10.
[0042]
In FIG. 3, the display device 10 includes an antenna 31, a tuner 32, an external input reception unit 33, an input control unit 34, an image processing unit 35, an audio processing unit 36, an audio output position control unit 37, a display unit 38, an audio output unit 39, a control unit 41, and a user instruction receiving unit 42.
[0043]
The tuner 32 selects (tunes to) the broadcast desired by the user, receives the selected television signal via the antenna 31, and supplies the received television signal to the input control unit 34.
The external input reception unit 33 has an external input terminal for acquiring content data such as video and audio from the outside, acquires the content data supplied via a cable connected to the external input terminal, and supplies the data to the input control unit 34.
[0044]
The input control unit 34 performs processing related to the input of television signals and content data based on user instructions and the like, for example, processing for extracting and separating image data and audio data from the television signals and content data.
The input control unit 34 then supplies the image data to be output to the image processing unit 35, and the audio data to be output to the audio processing unit 36.
[0045]
The image processing unit 35 performs image processing on the image data supplied from the input control unit 34, for example conversion of the data format and processing of the image such as adjustment of lightness and saturation, and supplies the processed image data to the display unit 38 to display the image.
The image processing unit 35 also supplies the processed image data to the audio output position control unit 37.
[0046]
The audio processing unit 36 performs audio processing such as effect processing on audio data
supplied from the input control unit 34, and supplies the processed audio data to the audio
output position control unit 37.
Here, audio data is assumed to be monaural (single channel) information.
The stereo (multi-channel) audio data will be described later.
[0047]
The audio output position control unit 37 analyzes the image of the image data supplied from the image processing unit 35 and, if the scene shows a person speaking, controls the output destination of the audio data supplied from the audio processing unit 36: the audio data is supplied to the audio output unit 39 so that the voice is output from the speaker near the position of the speaking person.
[0048]
The display unit 38 has a monitor (not shown) for displaying an image, and displays an image
corresponding to the image data supplied from the image processing unit 35 on the monitor.
[0049]
The audio output unit 39 includes the speakers 11 to 13 shown in FIG. 2, and outputs the audio data supplied from the audio output position control unit 37 from the speaker (one of the speakers 11 to 13) designated by the audio output position control unit 37.
[0050]
For example, each of the speakers 11 to 13 of the audio output unit 39 is connected to the audio output position control unit 37 by a different bus, and the audio output position control unit 37 selects the speaker from which the audio data is output by selecting the bus on which to output the audio data.
The audio output unit 39 outputs the audio from the speaker to which the audio data is supplied.
[0051]
Alternatively, the audio output unit 39 may have a switching function for switching the output destination: the audio output position control unit 37 outputs the audio data via a common bus used for output from any of the speakers, together with a control signal indicating the output destination of the audio data, and the audio output unit 39 switches the switch based on the control information so that the audio data is output from the speaker selected by the audio output position control unit 37.
[0052]
The control unit 41 controls the entire display device 10, including the tuner 32, the external input reception unit 33, the input control unit 34, the image processing unit 35, the audio processing unit 36, the audio output position control unit 37, the display unit 38, and the audio output unit 39, based on, for example, the user instructions received by the user instruction receiving unit 42.
[0053]
The user instruction receiving unit 42 has, for example, a light receiving unit that receives an infrared signal containing a user instruction output from a remote commander, which is an input device operated by the user, and supplies the acquired user instruction to the control unit 41.
The user instruction receiving unit 42 may also have, for example, buttons or switches, or an input device such as a keyboard or a mouse.
[0054]
FIG. 4 is a block diagram showing a detailed configuration example of the audio output position control unit 37.
[0055]
In FIG. 4, the audio output position control unit 37 includes a screen area division unit 51, a
difference average value calculation unit 52, a determination unit 53, and an audio output
control unit 54.
[0056]
The screen area division unit 51 divides the display screen into a plurality of areas according to
the arrangement of the speakers, and assigns the plurality of areas to each frame image of the
image data supplied from the image processing unit 35.
[0057]
FIG. 5 shows an example of area division.
As shown in FIG. 5, the display device 10 has three speakers (speakers 11 to 13) arranged in the
horizontal direction.
Therefore, the screen area division unit 51 divides the display image 20 in the horizontal direction into three areas 61 to 63.
In FIG. 5, the images of the person 21 to the person 23 included in the display image 20 are
allocated to the area 61 to the area 63, respectively, by this division.
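The horizontal division in FIG. 5 can be sketched as follows (a minimal illustration; the function name and the example screen width are assumptions):

```python
def divide_screen(width, n_areas=3):
    """Split the screen width into n_areas equal horizontal strips.

    Returns a list of (x_start, x_end) column ranges, end exclusive,
    analogous to the three areas 61 to 63 of FIG. 5. Integer division
    keeps the strips contiguous and covering every column even when
    the width is not evenly divisible.
    """
    bounds = [width * i // n_areas for i in range(n_areas + 1)]
    return list(zip(bounds[:-1], bounds[1:]))
```

For a 1920-pixel-wide image this yields three 640-pixel strips; an awkward width such as 7 still produces contiguous ranges covering all columns.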
[0058]
Returning to FIG. 4, the difference average value calculation unit 52 identifies, in each frame image of the image data, the portions showing a predetermined feature determined in advance, and calculates the inter-frame difference of the luminance values of those portions, thereby measuring the inter-frame variation of the luminance value of the feature portions.
Then, the difference average value calculation unit 52 calculates the average value of the inter-frame variation for each area.
A detailed configuration example of the difference average value calculation unit 52 will be
described later.
The difference average value calculation unit 52 supplies the calculated difference average value
to the determination unit 53.
[0059]
The determination unit 53 determines whether or not voice output position control is to be
performed based on the value of the difference average value, and notifies the voice output
control unit 54 of the determination result.
The audio output control unit 54 controls the output position of the supplied audio data based on
the determination result of the determination unit 53, and supplies the audio data to any one of
the speakers 11 to 13 of the audio output unit 39.
When not controlling the output position of the audio data, the audio output control unit 54
supplies the audio data to all of the speakers 11 to 13 of the audio output unit 39.
[0060]
FIG. 6 is a block diagram showing a detailed configuration example of the difference average
value calculation unit 52 of FIG.
In FIG. 6, the difference average value calculation unit 52 includes a pixel value conversion unit 71, a frame memory 72, a feature pixel extraction unit 73, a variable management unit 74, a difference calculation unit 75, a determination unit 76, a difference average value calculation unit 77, and a difference average value storage unit 78.
[0061]
When the pixel value conversion unit 71 acquires the image data and the area information supplied from the screen area division unit 51, it converts the image data into a predetermined data format pixel by pixel and supplies it to the frame memory 72, which holds it.
The frame memory 72 holds the image data in units of one frame.
That is, the frame memory 72 holds the data of a frame image until processing of the next frame image is started.
The pixel value conversion unit 71 also supplies the image data and the area information to the feature pixel extraction unit 73.
[0062]
The feature pixel extraction unit 73 extracts feature pixels, which are pixels having a predetermined feature, contained in the image data.
For example, the feature pixel extraction unit 73 extracts, from the pixels of the supplied image data, those whose values fall within a predetermined range (predetermined color values) corresponding to a person, based on hue, saturation, lightness, luminance value, RGB values, or the like.
Each time a feature pixel is extracted, the feature pixel extraction unit 73 increments the in-area feature pixel count 81, a variable held and managed by the variable management unit 74.
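A range test of this kind can be sketched as a simple per-pixel predicate. The RGB range boundaries below are placeholder values chosen for illustration only; the patent states only that the range is predetermined from hue, saturation, lightness, luminance, RGB values, or the like.

```python
def is_feature_pixel(r, g, b,
                     r_range=(90, 255), g_range=(40, 200), b_range=(20, 180)):
    """Return True when an RGB pixel falls inside a predetermined
    color-value range taken to represent a person.

    Each channel is checked against an inclusive (low, high) range;
    the boundary values here are assumed placeholders, not values
    from the patent.
    """
    return (r_range[0] <= r <= r_range[1]
            and g_range[0] <= g <= g_range[1]
            and b_range[0] <= b <= b_range[1])
```

Applied to every pixel of a frame, this predicate yields the feature-pixel mask whose count the in-area feature pixel count 81 accumulates.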
[0063]
The variable management unit 74 holds the in-area feature pixel count 81, a variable for counting the number of feature pixels in each area, and the in-area difference value sum 82, a variable for accumulating the inter-frame difference values of the luminance values of the feature pixels in each area, and manages the updating of their values.
For example, the variable management unit 74 increments the in-area feature pixel count 81 each time the feature pixel extraction unit 73 extracts a feature pixel.
The variable management unit 74 also acquires, for example, the inter-frame difference value of the luminance value of a feature pixel supplied from the difference calculation unit 75, and adds the difference value to the in-area difference value sum 82.
Furthermore, the variable management unit 74 provides, for example, the in-area feature pixel count 81 and the in-area difference value sum 82 to the difference average value calculation unit 77 as necessary.
[0064]
The difference calculation unit 75 acquires the image data of the previous frame stored in the frame memory 72, and calculates, for each feature pixel extracted by the feature pixel extraction unit 73, the difference value between the luminance value of the previous frame and the luminance value of the current frame.
The difference calculation unit 75 supplies the calculated difference value to the variable management unit 74.
[0065]
The determination unit 76 determines, in response to the processing results, whether all pixels in the area have been processed, and when all pixels in the area have been processed, notifies the difference average value calculation unit 77 to that effect.
[0066]
The difference average value calculation unit 77 obtains the in-area feature pixel count 81 and the in-area difference value sum 82 held by the variable management unit 74, and uses them to calculate, for each area, the difference average value, which is the average of the difference values.
The difference average value storage unit 78 stores the difference average value calculated by the difference average value calculation unit 77.
When the difference average value storage unit 78 has stored the difference average values for all the areas, it supplies them to the determination unit 53.
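The per-area computation above (count 81, sum 82, then the average) can be sketched end to end. This is an illustrative condensation, not the patent's unit-by-unit implementation: the function name and data layout are assumptions.

```python
def difference_average(prev_luma, curr_luma, feature_mask, areas):
    """Average inter-frame luminance difference of feature pixels per area.

    prev_luma, curr_luma: 2-D lists of luminance values (same shape).
    feature_mask: 2-D list of booleans marking the feature pixels.
    areas: list of (x_start, x_end) column ranges, end exclusive.
    Returns one average per area (0.0 for areas without feature pixels),
    mirroring the in-area feature pixel count 81 and the in-area
    difference value sum 82.
    """
    sums = [0.0] * len(areas)
    counts = [0] * len(areas)
    for y, row in enumerate(feature_mask):
        for x, is_feature in enumerate(row):
            if not is_feature:
                continue
            for i, (x0, x1) in enumerate(areas):
                if x0 <= x < x1:
                    sums[i] += abs(curr_luma[y][x] - prev_luma[y][x])
                    counts[i] += 1
                    break
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]
```

An area whose feature pixels change a lot between frames (for example, a moving mouth) gets a large average, which is what the determination unit 53 then evaluates.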
[0067]
Next, the flow of specific processing of each part will be described.
[0068]
First, an example of the flow of the image display process performed by the display device 10
will be described with reference to the flowchart of FIG. 7.
[0069]
In step S1, the input control unit 34 receives a television signal via the tuner 32, and extracts
image data and audio data from the received television signal.
In step S2, the image processing unit 35 performs image processing on the image data.
In step S3, the audio processing unit 36 performs audio processing on the audio data.
In step S4, the audio output position control unit 37 controls a speaker (audio output position)
that outputs the audio of the audio data according to the image of the image data.
Details of the voice output position control process will be described later.
[0070]
In step S5, the display unit 38 displays the image of the supplied image data.
In step S6, the audio output unit 39 outputs the audio of the supplied audio data from the
speaker (position) based on the control of the audio output position control unit 37.
[0071]
In step S7, the control unit 41 determines whether to end the image display processing. If it is
determined not to end, the processing returns to step S1, and the subsequent processing is
repeated.
If it is determined in step S7 that the image display processing is to be ended, the control unit 41
proceeds to step S8, performs termination processing such as powering off, and terminates the
image display processing.
[0072]
Next, an example of a detailed flow of the audio output position control process executed in step S4 of FIG. 7 will be described with reference to the flowchart of FIG. 8.
[0073]
When the voice output position control process is started, the screen area division unit 51
divides the screen area into a plurality of areas corresponding to the speaker arrangement in
step S21.
In step S22, the difference average value calculation unit 52 calculates the difference average
value of the pixels (feature pixels) exhibiting a predetermined feature in each of the divided
regions.
Details of the feature pixel difference average value calculation process will be described later.
[0074]
In step S23, the determination unit 53 determines whether or not the difference average value is equal to or less than a predetermined threshold value in all the regions. If it is determined that at least one difference average value is greater than the threshold, the determination unit 53 advances the process to step S24. In step S24, the audio output control unit 54 identifies the region where the difference average value is the largest, that is, for example, the region where the movement of an image feature such as a mouth is most intense, determines that a speaking person exists in that region, and selects the speaker corresponding to that region as the speaker for outputting the sound. In step S25, the audio output control unit 54 supplies the audio data to the audio output unit 39 based on that selection, and causes the speaker corresponding to the region having the largest difference average value to output the audio. When the process of step S25 ends, the voice output control unit 54 advances the process to step S27.
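The branching of steps S23 to S26 amounts to a simple selection rule. The sketch below uses assumed names and is not prescribed by the patent:

```python
def select_speakers(diff_averages, threshold):
    """Return the indices of regions whose speakers should output audio.

    If every region's difference average is at or below the threshold, no
    speaking person is detected and all speakers output the audio (step S26);
    otherwise only the speaker of the most active region is used (steps
    S24 and S25).
    """
    if all(v <= threshold for v in diff_averages):
        return list(range(len(diff_averages)))
    best = max(range(len(diff_averages)), key=lambda i: diff_averages[i])
    return [best]
```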
[0075]
When it is determined in step S23 that the difference average value is equal to or less than the
threshold value in all the regions, the determination unit 53 advances the process to step S26. In
step S26, the audio output control unit 54 supplies audio data to all the speakers and causes all
the speakers to output audio. When the process of step S26 ends, the audio output control unit
54 proceeds with the process to step S27.
[0076]
In step S27, the voice output control unit 54 determines whether the voice output position control process is to be ended. If it is determined not to end, the process returns to step S22, and the subsequent processes are repeated. If it is determined in step S27 that the voice output position control process is to be ended, the voice output control unit 54 advances the process to step S28 to perform an end process, and ends the voice output position control process. The process then returns to step S4 of FIG. 7, and the processes of step S5 and subsequent steps are executed.
[0077]
Next, an example of a detailed flow of the feature pixel difference average value calculation
process executed in step S22 of FIG. 8 will be described with reference to the flowchart of FIG.
[0078]
In step S41, the variable management unit 74 initializes variables such as the in-area feature
pixel count 81 and the in-area difference value sum 82.
In step S42, the pixel value conversion unit 71 converts pixel values. In step S43, the feature pixel extraction unit 73 determines whether the pixel whose pixel value has been converted by the pixel value conversion unit 71 is a feature pixel having a feature of an image of a person. If it is determined to be a feature pixel, the process proceeds to step S44.
[0079]
In step S44, the variable management unit 74 adds 1 to the in-area feature pixel count 81. In step S45, the difference calculating unit 75 calculates the difference between the luminance values of the previous frame and the current frame. In step S46, the variable management unit 74 adds the difference value
to the in-area difference value sum 82. When the process of step S46 ends, the variable
managing unit 74 advances the process to step S47. When it is determined in step S43 that the pixel is not a feature pixel, the feature pixel extraction unit 73 advances the process to step S47.
[0080]
In step S47, the determination unit 76 determines whether or not all the pixels in the region have been processed. If it is determined that they have not, the process returns to step S42, and the subsequent processing is repeated for the next pixel.
If it is determined in step S47 that all the pixels in the area have been processed, the
determination unit 76 advances the process to step S48.
[0081]
In step S48, the difference average value calculation unit 77 calculates the difference average value for the region based on the in-region feature pixel count 81 and the in-region difference value sum 82. In step S49, the difference average value storage unit 78 stores the calculated difference average value, and in step S50 it determines whether or not processing has been completed for all areas. If it is determined that an unprocessed area remains, the process returns to step S41, and the subsequent processing is repeated for the next area.
[0082]
When it is determined in step S50 that all the areas have been processed, the difference average value storage unit 78 ends the feature pixel difference average value calculation process, returns the process to step S22 of FIG. 8, and causes the processes of step S23 and subsequent steps to be executed.
[0083]
By analyzing the display image 20 as shown in FIG., the display device 10 identifies the persons 21 to 23 from the features of the pixel values. In addition, it uses the inter-frame difference average value to specify which of the persons 21 to 23 is speaking, and controls the output position of the sound according to the contents of the display image 20 so that, for example, the voice of the person 21 is output from the speaker 11, the voice of the person 22 is output from the speaker 12, and the voice of the person 23 is output from the speaker 13.
[0084]
By doing this, the display device 10 can output the sound corresponding to the image to be
displayed so as to further enhance the sense of reality.
[0085]
The number of speakers is arbitrary, and their arrangement is also arbitrary. Also, the areas and the speakers need not be associated on a one-to-one basis: one speaker may correspond to a plurality of areas, or a plurality of speakers may correspond to one area.
For example, as shown in FIG. 10A, the display device 10 may have fourteen speakers (speakers
91A to 91P) so as to surround the display image 20.
In that case, as shown in FIG. 10B, the display image 20 is divided into a total of 12 regions
(regions 101 to 112): three in the vertical direction and four in the horizontal direction.
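A grid division of this kind can be sketched as follows; the function name and the row-major bound computation are illustrative assumptions:

```python
def divide_screen(width, height, cols=4, rows=3):
    """Divide the screen into rows x cols regions (regions 101 to 112 in
    FIG. 10B use rows=3, cols=4), returned in row-major order as
    (top, left, bottom, right) pixel bounds, end-exclusive."""
    regions = []
    for r in range(rows):
        for c in range(cols):
            regions.append((height * r // rows, width * c // cols,
                            height * (r + 1) // rows, width * (c + 1) // cols))
    return regions
```

Each region index can then be mapped to one or more speakers, as in the correspondence described next.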
[0086]
At this time, for example, the speakers 91A and 91P correspond to the area 101, the speaker 91B corresponds to the area 102, the speaker 91C corresponds to the area 103, the speakers 91D and 91E correspond to the area 104, the speaker 91N corresponds to the area 105, the speakers 91B, 91F, 91K, and 91N correspond to the area 106, the speakers 91C, 91F, 91J, and 91N correspond to the area 107, the speaker 91F corresponds to the area 108, the speakers 91L and 91M correspond to the area 109, the speaker 91K corresponds to the area 110, the speaker 91J corresponds to the area 111, and the speakers 91G and 91H correspond to the area 112. Of course, the correspondence may be made by other methods.
[0087]
The present invention may be applied to anything that displays an image and outputs sound corresponding to the image. For example, it may be a system using a projector as shown in FIG. 11. In the case of FIG. 11, the projector 122 is located on the back side of the screen 121 and projects an image onto the back side of the screen 121. On the front side of the screen 121, the speakers 131 to 142 are arranged side by side facing the rear side, and sound corresponding to the image 123 projected by the projector 122 is output from these speakers. That is, the speakers 131 to 142 output sound from the back side of the image 123 projected on the screen 121 by the projector 122.
[0088]
In such a system, as described above, by selecting the speakers 131 to 142 that output sound according to the contents of the projected image 123, the voice of, for example, an angry person can be output from the speaker behind that person. Therefore, the system of FIG. 11 can output the sound corresponding to the image to be displayed in a more realistic manner.
[0089]
In addition, instead of selecting the speaker that outputs the audio, the volume output from each speaker may be controlled. That is, instead of controlling whether or not audio is output from each speaker, the audio may be output mainly from the speaker corresponding to the selected area as described above, while the volume of the audio output from the speakers corresponding to the other areas is reduced.
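This volume-weighting variant can be sketched as follows; the attenuation factor of 0.3 is a hypothetical value, not one given in the text:

```python
def speaker_gains(n_speakers, selected, base=1.0, attenuated=0.3):
    """Full volume for the speaker of the selected region, reduced volume
    (rather than silence) for the speakers of the other regions."""
    return [base if i == selected else attenuated
            for i in range(n_speakers)]
```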
[0090]
Also, the audio data may have more than one channel. For example, audio whose output position in the display image 20 cannot be specified, such as BGM, and audio whose position can be identified may be separated into different channels and processed separately.
[0091]
FIG. 12 is a block diagram showing a configuration example of a display device in such a case. In FIG. 12, the display device 150 basically has the same configuration as the display device 10. However, unlike the audio processing unit 36, the audio processing unit 151 separates and outputs the audio of each channel of the input audio data. Audio whose output position cannot be specified, such as BGM, is supplied directly to the audio output unit 152, and the audio output position control unit 37 processes only audio whose output position can be identified, such as a human voice. The audio output unit 152 acquires the audio data of each channel, synthesizes the channels for each speaker, and outputs the audio.
[0092]
As described above, the display device may control the output position of only some of the channels. The sound of each channel may also be output-position controlled independently of the others.
[0093]
Furthermore, the display device may, of course, control the output positions of the audio data of
the left and right two channels as in the stereo system. In that case, as shown in FIG. 13, the left
and right two-channel speakers may be processed as one set and processed in the same manner
as the monaural sound data described above.
[0094]
For example, in the case of FIG. 13, the display device 160 includes three stereo speakers (stereo
speaker 161 to stereo speaker 163) arranged to be aligned in the horizontal direction. The stereo
speaker 161 has a left speaker 161A and a right speaker 161B, and can output left and right
two-channel audio. Similarly, the stereo speaker 162 also has a left speaker 162A and a right
speaker 162B, and can output left and right two-channel audio. Similarly, the stereo speaker 163
also has a left speaker 163A and a right speaker 163B, and can output left and right two-channel
audio.
[0095]
Similar to the monaural case, the display device 160 selects, from the stereo speakers 161 to 163, the stereo speaker that outputs the left and right two-channel audio based on the display image 20, and outputs the audio of the left and right channels from the left and right speakers of the selected stereo speaker. At this time, the control of the output position performed based on the image can also be corrected using the volume difference between the left and right channels of the stereo sound.
[0096]
When the output position of the sound is not controlled, the display device 160 treats the stereo speakers 161 to 163 as one stereo speaker and outputs the stereo sound accordingly. For example, the audio output from the stereo speaker 162 is stopped, the audio of the left channel is output from the stereo speaker 161 (both the left speaker 161A and the right speaker 161B), and the audio of the right channel is output from the stereo speaker 163 (both the left speaker 163A and the right speaker 163B).
[0097]
An example of the flow of such an audio output position control process will be described with
reference to the flowchart of FIG.
[0098]
When the voice output position control process is started, the screen area division unit 51
divides the screen area into a plurality of areas corresponding to the speaker arrangement in
step S71.
In step S72, the difference average value calculation unit 52 calculates, in each of the divided
regions, the difference average value of the pixels (feature pixels) exhibiting a predetermined
feature, as described with reference to the flowchart of FIG.
[0099]
In step S73, the determination unit 53 determines whether the difference average value is equal to or less than a predetermined threshold value in all areas. If it is determined that at least one difference average value is greater than the threshold value, the process proceeds to step S74. In step S74, the audio output control unit 54 controls the output of the audio based on the difference average values and the audio data. When the process of step S74 ends, the audio output control unit 54 advances the process to step S76.
[0100]
When it is determined in step S73 that the difference average value is equal to or less than the threshold value in all the regions, the determination unit 53 advances the process to step S75. In step S75, the audio output control unit 54 controls all the speakers as stereo speakers to output the audio. When the process of step S75 ends, the audio output control unit 54 advances the process to step S76.
[0101]
In step S76, the audio output control unit 54 determines whether or not the audio output position control process is to be ended. If it is determined that the process is not to be ended, the process returns to step S72 and the subsequent processes are repeated. If it is determined in step S76 that the audio output position control process is to be ended, the audio output control unit 54 advances the process to step S77 to perform an end process, and ends the audio output position control process. The process then returns to step S4 of FIG. 7, and the processes of step S5 and subsequent steps are executed.
[0102]
By doing as described above, the display device can output the sound corresponding to the image
to be displayed so as to further enhance the sense of reality, even when the sound data has
multiple channels.
[0103]
The audio output control unit 54 may correct the control of the output position performed based
on the image by correcting the difference average value using the volume difference between the
left and right channels of stereo sound.
A detailed configuration example of the audio output control unit 54 in that case is shown in FIG.
[0104]
In FIG. 15, the audio output control unit 54 includes a sound volume confirmation unit 171, a
difference average value correction unit 172, an area specification unit 173, and an output
control unit 174.
[0105]
The sound volume confirmation unit 171 confirms the volume difference between the left and right channels of the audio data, thereby confirming in which of the left and right directions the sound is mainly output, and supplies the confirmation result to the difference average value correction unit 172.
The difference average value correction unit 172 corrects the value of the difference average
value of each area at a predetermined ratio based on the confirmation result, and supplies the
corrected difference average value to the area identification unit 173. The area specifying unit
173 specifies an area in which the sound is output based on the corrected difference average
value, and supplies the specified result to the output control unit 174. The output control unit
174 supplies the audio data to the audio output unit 39 so that the audio is output from the
speaker corresponding to the area where the audio is output based on the specification result.
[0106]
Next, an example of the flow of the audio output control process executed by the audio output
control unit 54 in step S74 of FIG. 14 will be described with reference to the flowchart of FIG.
[0107]
First, in step S91, the sound volume confirmation unit 171 specifies the audio output area, which is the area where the sound is output, based on the volume of each channel of the audio data.
In step S92, the difference average value correction unit 172 corrects the difference average value of each area based on the audio output area identified in step S91. For example, the difference average value correction unit 172 increases the difference average value of the audio output area by 10% and decreases the difference average values of the other areas by 10%, correcting the values so that the difference average value of the audio output area becomes relatively larger.
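The 10% correction described above can be sketched as follows (the function name and signature are illustrative; 10% is the example ratio given in the text):

```python
def correct_diff_averages(diff_averages, audio_region, boost=0.10):
    """Raise the difference average of the region where the stereo volume
    places the sound and lower the others by the same ratio (step S92)."""
    return [v * (1 + boost) if i == audio_region else v * (1 - boost)
            for i, v in enumerate(diff_averages)]
```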
[0108]
In step S93, the area specifying unit 173 specifies the area having the largest difference average value after the correction. In step S94, the output control unit 174 controls the output so as to
output the sound from the speaker corresponding to the area where the difference average value
is the largest. When the process of step S94 ends, the audio output control unit 54 ends the
audio output control process, returns the process to step S74 of FIG. 14, and executes the
processes of step S76 and subsequent steps.
[0109]
The voice output control may also be corrected using the voice data by other methods. For example, in the case of controlling the volume of the sound output from each speaker, instead of switching the speaker that outputs the sound based on the contents of the display image, the display device may correct the volume of each speaker, determined based on the contents of the display image, based on the volumes of the left and right channels of the audio data.
[0110]
A detailed configuration example of the audio output control unit 54 in that case is shown in FIG.
[0111]
In FIG. 17, the audio output control unit 54 includes an area specification unit 181, a volume
adjustment unit 182, a volume confirmation unit 183, a comparison unit 184, an adjustment
amount correction unit 185, and an output control unit 186.
[0112]
The area specifying unit 181 specifies the area having the largest difference average value, and
supplies the result to the volume adjusting unit 182 and the comparing unit 184.
The sound volume adjustment unit 182 generates control information for adjusting the sound
volume output from the speakers corresponding to the respective regions based on the result,
and supplies the control information to the adjustment amount correction unit 185.
The volume confirmation unit 183 specifies the area in the display image where the sound is output based on the volume difference between the left and right channels of the audio data, and supplies the result to the comparison unit 184. The comparison unit 184 compares the information supplied from the area specifying unit 181 with the information supplied from the volume confirmation unit 183, determines whether the areas indicated by both match each other, and supplies the determination result to the adjustment amount correction unit 185.
[0113]
If the area having the largest difference average value specified by the area specifying unit 181 matches the area where the sound specified by the volume confirmation unit 183 is output, the adjustment amount correction unit 185 estimates from that determination result that the localization of the sound output position is strong and that the area has been identified with high accuracy. It therefore corrects the control information generated by the volume adjustment unit 182 so that the volume difference between the regions (the volume difference between the speakers) becomes larger. Conversely, when the area having the largest difference average value specified by the area specifying unit 181 and the area where the sound is output specified by the volume confirmation unit 183 do not match, the localization of the sound output position is weak and the identification accuracy is low, so the adjustment amount correction unit 185 corrects the control information generated by the volume adjustment unit 182 so that the volume difference between the regions (the volume difference between the speakers) becomes smaller.
[0114]
The adjustment amount correction unit 185 supplies the control information with the corrected adjustment amount to the output control unit 186. The output control unit 186 controls the volume of the audio data of each speaker based on the supplied control information.
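The widen-or-narrow rule of paragraph [0113] can be sketched as follows; the 1.5x and 0.5x factors are hypothetical, as the text does not give concrete values:

```python
def adjust_volume_difference(default_gap, image_region, audio_region,
                             widen=1.5, narrow=0.5):
    """When image analysis and the stereo volume agree on the region, the
    localization is trusted and the inter-speaker volume gap is widened;
    when they disagree, the gap is narrowed (steps S115 to S117)."""
    factor = widen if image_region == audio_region else narrow
    return default_gap * factor
```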
[0115]
Next, an example of the flow of the audio output control process executed by the audio output
control unit 54 in step S74 of FIG. 14 will be described with reference to the flowchart of FIG.
[0116]
In step S111, the area specifying unit 181 refers to the supplied difference average value for
each area to specify the area having the largest difference average value.
In step S112, the volume adjustment unit 182 generates, based on the specification result of step S111, control information for adjusting the output sound so that a volume difference arises among the speakers.
[0117]
In step S113, the volume confirmation unit 183 specifies the audio output area based on the volume of each channel of the audio data. In step S114, the comparison unit 184 compares the area having the largest difference average value identified in the process of step S111 with the audio output area identified in the process of step S113.
[0118]
In step S115, the comparison unit 184 determines whether the two areas match or not. If it is
determined that the two areas match, the process proceeds to step S116. In step S116, the
adjustment amount correction unit 185 corrects the adjustment amount so that the volume
difference is larger than the default value set in step S112. When the process of step S116 ends,
the adjustment amount correction unit 185 advances the process to step S118.
[0119]
If it is determined in step S115 that the two areas do not match, the comparison unit 184
advances the process to step S117. In step S117, the adjustment amount correction unit 185
corrects the adjustment amount so that the volume difference is smaller than the default value
set in step S112. When the process of step S117 ends, the adjustment amount correcting unit
185 advances the process to step S118.
[0120]
In step S118, the output control unit 186 adjusts the volume of the output sound of each speaker
based on the control information for which the adjustment amount has been corrected, and
outputs the sound. When the process of step S118 ends, the audio output control unit 54 ends
the audio output control process, returns the process to step S74 of FIG. 14, and executes the
processes of step S76 and subsequent steps.
[0121]
As described above, by correcting the audio output control determined by analyzing the display image using an analysis of the audio data, the display device can perform the audio output control more accurately and can output the audio so as to further enhance the sense of reality.
[0122]
The display device may not only switch the speaker that outputs the sound but may also control the sound output position in a pseudo manner by audio signal processing. By doing this, the display device can make the user feel that sound is being output from an arbitrary position, without the output position of the sound being limited by the placement and number of the speakers.
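One common way to realize such pseudo positioning is equal-power amplitude panning between adjacent speakers; the sketch below is an assumption about how this could be done, not the patent's stated method:

```python
import math

def pan_gains(position, n_speakers):
    """Constant-power pan of a mono source across a row of speakers.

    position: 0.0 (far left) to 1.0 (far right) of the display image.
    Interpolates between the two nearest speakers so the source seems to
    come from positions other than the physical speaker locations.
    """
    x = position * (n_speakers - 1)
    i = min(int(x), n_speakers - 2)   # left speaker of the active pair
    t = x - i                         # fractional position within the pair
    gains = [0.0] * n_speakers
    gains[i] = math.cos(t * math.pi / 2)
    gains[i + 1] = math.sin(t * math.pi / 2)
    return gains
```

The cosine/sine pair keeps the summed power constant, so the perceived loudness does not change as the virtual position moves.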
[0123]
In addition, the output position of the sound can be virtually changed by controlling the output timing (delay amount) of the sound of each speaker according to the position, relative to the display device, of the user viewing the content.
[0124]
FIG. 19 is a block diagram showing a configuration example of a display device in that case.
[0125]
In FIG. 19, the display device 200 includes a user position correspondence processing unit 201 in addition to the configuration of the display device 10.
The user position correspondence processing unit 201 is supplied, via the user instruction accepting unit 42, with the position information of the remote commander, which is the output information of a position sensor provided in the remote commander and is supplied together with the user instruction command from the remote commander. The user position correspondence processing unit 201 detects the position of the remote commander relative to the display device 200 based on the position information, and sets that position as the user position. Then, the user position correspondence processing unit 201 calculates the delay amount of the audio output of each speaker based on the user position, delays the audio data supplied from the audio output position control unit 37 by the delay amount, supplies it to the audio output unit 39, and causes it to be output as audio. That is, the user position correspondence processing unit 201 delays the audio output based on the user position (the relative position of the user with respect to the display device).
[0126]
FIG. 20 is a block diagram showing a detailed configuration example of the user position correspondence processing unit 201. In FIG. 20, the user position correspondence processing unit 201 includes a remote commander position detection unit 211, a delay amount calculation unit 212, and a delay control unit 213.
[0127]
The remote commander position detection unit 211 detects the position of the remote
commander as the user position based on the output information of the position sensor supplied
from the remote commander, and supplies the information to the delay amount calculation unit
212. The delay amount calculation unit 212 calculates the delay amount of the audio output of
each speaker based on the information of the user position supplied from the remote commander
position detection unit 211, and supplies it to the delay control unit 213. The delay control unit
213 delays the output timing of the audio data for each speaker supplied from the audio output
position control unit 37 by the delay amount and then supplies it to the audio output unit 39 to
output the audio.
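A plausible delay computation, assuming the goal is that sound from every speaker arrives at the user simultaneously (the patent does not specify the formula), is:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, approximate value at room temperature

def speaker_delays(user_pos, speaker_positions):
    """Per-speaker delay in seconds so that sound from all speakers arrives
    at the user at the same time: nearer speakers are delayed more.
    Positions are (x, y) coordinates in metres relative to the display."""
    distances = [math.dist(user_pos, p) for p in speaker_positions]
    farthest = max(distances)
    return [(farthest - d) / SPEED_OF_SOUND for d in distances]
```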
[0128]
Next, as a specific example of this processing, an example of the flow of the user position correspondence process executed by the user position correspondence processing unit 201 will be described with reference to the flowchart in FIG. 21.
[0129]
In step S131, the remote commander position detection unit 211 detects the position of the
remote commander to detect the user position.
In step S132, the delay amount calculation unit 212 calculates the delay amount of the audio
output of each speaker based on the user position. In step S133, the delay control unit 213
delays the audio output of each speaker by the calculated delay amount. In step S134, the user position correspondence processing unit 201 determines whether or not the user position correspondence process is to be ended. If it is determined not to end, the process returns to step S131 and the subsequent processing is repeated. If it is determined in step S134 that the process is to be ended, the user position correspondence processing unit 201 ends the user position correspondence process.
[0130]
As described above, by controlling the timing of the audio output in accordance with the position of the user, the display device can make the user feel more strongly as if the audio were being output from the corresponding position in the display image, and can output the audio so as to further enhance the sense of reality.
[0131]
In addition, the speaker included in the display device may be detachable from the display device.
FIG. 22 is a perspective view showing an example of a display device to which a speaker can be
attached and detached.
[0132]
In FIG. 22, the display device 231 is provided with a plurality of concave portions 232 for mounting speakers on its four side surfaces (upper, lower, left, and right, with the image display surface as the front). The speaker 233A and the speaker 233B are speakers attachable to and detachable from the display device 231; they are referred to as the speakers 233 when it is not necessary to distinguish them from each other. Each speaker 233 is provided with a convex portion 234, as shown for the speaker 233B. The convex portion 234 corresponds to the concave portion 232 of the display device 231, and the user mounts a speaker 233 on the display device 231 by fitting the convex portion 234 into a concave portion 232, whereby the speaker 233 can be fixed to the side surface of the display device 231.
[0133]
In addition, the user can separate the speaker 233 fixed to the display device 231 from the
display device 231 by pulling the speaker 233 away from the display device 231.
[0134]
Note that electrodes are provided at corresponding positions in the concave portion 232 and the
convex portion 234, and in a state where the speaker 233 is fixed to the display device 231, the
internal circuit of the display device 231 and the speaker 233 are electrically connected. The
speaker 233 acquires an audio signal output from the display device 231, and can output audio
corresponding to the audio signal.
[0135]
Furthermore, the plurality of concave portions 232 provided in the display device 231 have the same shape, and a speaker 233 can be attached to any of them. That is, as many speakers 233 as there are concave portions 232 can be attached to the display device 231.
[0136]
Further, as shown in FIG. 23, the display device 231 displays a menu screen that guides the input of the speaker arrangement setting. An input guide image 241 is displayed on the display device 231 shown in FIG. 23. The user inputs the arrangement of the actually installed speakers according to the input guide image 241, or selects the pattern closest to the actual arrangement from among prepared patterns. The display device 231 determines the division method of the area of the display image based on the input information on the arrangement of the speakers, stores the setting, and uses it for the audio output position control.
[0137]
FIG. 24 is a block diagram showing an example of the internal configuration of the display device 231 in that case. In FIG. 24, the display device 231 has basically the same configuration as the display device 10, but additionally includes an area setting unit 251.
[0138]
The area setting unit 251 supplies the image data of the input guide image 241 to the image processing unit 35 and causes the display unit 38 to display it. The user operates the remote commander based on the input guide image 241 to input information on the arrangement of the speakers. When the user instruction accepting unit 42 acquires the user instruction, it supplies the instruction to the area setting unit 251. The area setting unit 251 sets the area based on the speaker arrangement information input by the user and supplies the setting information to the audio output position control unit 37. The audio output position control unit 37 divides the display image into a plurality of areas corresponding to the speaker arrangement based on that setting.
[0139]
FIG. 25 is a block diagram showing a detailed configuration example of the area setting unit 251. As shown in FIG. 25, the area setting unit 251 includes an input guidance image display control unit 261, a user input acceptance processing unit 262, an area setting selection unit 263, and a division table storage unit 264.
[0140]
The input guidance image display control unit 261 supplies the input guidance image 241 to the image processing unit 35. When the user input acceptance processing unit 262 acquires, via the user instruction receiving unit 42, the user input entered according to the input guidance image 241, it extracts speaker position information, that is, information on the arrangement of the speakers, from the user input and supplies it to the area setting selection unit 263. Based on the division table stored in the division table storage unit 264, which associates speaker arrangement patterns with area division patterns, the area setting selection unit 263 selects the division pattern corresponding to the supplied speaker position information and supplies it to the audio output position control unit 37 as the area setting.
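The lookup performed by the area setting selection unit 263 can be sketched as follows. The position names, table contents, and fallback value are illustrative assumptions; the patent does not specify the format of the division table:

```python
# Hypothetical sketch of the division-table lookup of the area setting
# selection unit 263.  Keys are sorted tuples of speaker positions;
# values are area division patterns.  All entries are assumptions.
DIVISION_TABLE = {
    ("left", "right"): "2 columns",
    ("left", "right", "top"): "2 columns + top row",
    ("center", "left", "right"): "3 columns",
}

def select_area_setting(speaker_positions):
    """Select the division pattern matching the speaker arrangement,
    falling back to a single undivided area for unknown arrangements."""
    key = tuple(sorted(speaker_positions))
    return DIVISION_TABLE.get(key, "single area")

print(select_area_setting(["right", "left"]))  # -> 2 columns
```

Sorting the positions makes the lookup independent of the order in which the user entered the speakers.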
[0141]
An example of the flow of a specific process of the area setting process executed by the area
setting unit 251 will be described with reference to the flowchart of FIG.
[0142]
In step S151, the input guide image display control unit 261 causes the display unit 38 to display
the input guide image 241.
In step S152, the user input acceptance processing unit 262 accepts user input. In step S153, the user input acceptance processing unit 262 determines whether user input has been accepted; until it determines that input has been accepted, it returns the process to step S152 and repeats the subsequent processes. If it is determined in step S153 that user input has been received, the user input acceptance processing unit 262 advances the process to step S154. In step S154, the area setting selection unit 263 selects the optimum area setting based on the speaker positions and the division table. When the process of step S154 ends, the area setting unit 251 ends the area setting process.
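Steps S151 through S154 above can be sketched as the following loop. The three callables are hypothetical stand-ins for the display, input, and selection units described in the patent:

```python
def area_setting_process(display_input_guide, poll_user_input, select_area_setting):
    """Sketch of steps S151-S154: show the guide image, wait for the
    user's speaker-arrangement input, then select the area setting."""
    display_input_guide()                    # S151: show input guide image 241
    while True:
        user_input = poll_user_input()       # S152: accept user input
        if user_input is not None:           # S153: was input received?
            break
    return select_area_setting(user_input)   # S154: select the area setting

# Usage with stub callables: two empty polls, then a speaker arrangement.
inputs = iter([None, None, ["left", "right"]])
setting = area_setting_process(
    lambda: None,                                # guide-image display stub
    lambda: next(inputs),                        # user-input stub
    lambda positions: tuple(sorted(positions)),  # selection stub
)
print(setting)  # -> ('left', 'right')
```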
[0143]
By setting the area as described above, the display device can output the audio corresponding to the displayed image so as to further enhance the sense of reality, even when an arbitrary number of speakers are arranged at arbitrary positions.
[0144]
Note that, by confirming the connection status of the speaker connection terminals provided in each of the concave portions 232, the display device 231 may automatically grasp the arrangement of the speakers 233 without user input, as shown in FIG.
[0145]
FIG. 27 is a block diagram showing an example of the internal configuration of the display device
231 in such a case.
[0146]
In FIG. 27, the display device 231 basically has the same configuration as the display device 10, but additionally includes an area setting unit 301.
[0147]
The area setting unit 301 acquires, from the audio output unit 39, connection information indicating that a speaker 233 is connected.
For example, the area setting unit 301 transmits a predetermined signal to the speaker connection terminal provided in each concave portion 232, or measures the voltage of that terminal, and confirms the connection state of the speaker based on the response signal, the voltage, or the like.
The area setting unit 301 then sets the area based on the detected arrangement of the speakers and supplies the area setting information to the audio output position control unit 37.
The audio output position control unit 37 divides the display image into a plurality of areas corresponding to the speaker arrangement based on that setting.
[0148]
FIG. 28 is a block diagram showing a detailed configuration example of the area setting unit 301. As shown in FIG. 28, the area setting unit 301 includes a connection confirmation unit 311, a speaker position storage unit 312, an area setting selection unit 313, and a division table storage unit 314.
[0149]
The connection confirmation unit 311 acquires connection information from each of the speaker connection terminals and confirms the connection state of the speakers. When it detects a speaker, the connection confirmation unit 311 supplies speaker position information indicating the position to the speaker position storage unit 312. The speaker position storage unit 312 stores the positions of all detected speakers and supplies the position information to the area setting selection unit 313 as necessary.
[0150]
When the connection confirmation unit 311 has confirmed the connection of all the speaker connection terminals, the area setting selection unit 313 acquires the detected speaker position information from the speaker position storage unit 312, and acquires from the division table storage unit 314 a division table that associates speaker arrangement patterns with area division patterns. Using this division table, the area setting selection unit 313 selects the division pattern corresponding to the arrangement of the speakers and supplies it to the audio output position control unit 37 as the area setting.
[0151]
The flow of a specific process of the area setting process executed by the area setting unit 301
will be described with reference to the flowchart of FIG.
[0152]
In step S171, the connection check unit 311 selects an unprocessed speaker connection terminal.
In step S172, the connection check unit 311 checks the speaker connection for the selected
speaker connection terminal. In step S173, the connection check unit 311 determines whether a
speaker is detected. If it is determined that a speaker is detected, the process proceeds to step
S174. In step S174, the speaker position storage unit 312 stores the detected position of the
speaker, and the process proceeds to step S175. If it is determined in step S173 that no speaker
has been detected, the connection check unit 311 omits the process of step S174 and advances
the process to step S175. In step S175, the connection check unit 311 determines whether all the speaker connection terminals have been checked. If it is determined that an unprocessed speaker connection terminal remains, the process returns to step S171 and the subsequent processes are repeated. If it is determined in step S175 that all connection terminals have been confirmed, the connection check unit 311 advances the process to step S176.
In step S176, the area setting selection unit 313 selects the area setting based on the speaker
position and the division table and supplies the selected area setting to the audio output position
control unit 37, and then the area setting process ends.
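Steps S171 through S176 above amount to a probe loop over the speaker connection terminals. The sketch below uses hypothetical callables for the probe and the setting selection; the patent does not define these interfaces:

```python
def detect_speakers_and_set_area(terminals, check_connection, select_area_setting):
    """Sketch of steps S171-S176: probe every speaker connection terminal,
    remember where speakers respond, then derive the area setting.
    check_connection(terminal) is an assumed probe (e.g. a test signal or
    voltage measurement) returning True when a speaker is detected."""
    detected_positions = []                         # speaker position storage 312
    for terminal in terminals:                      # S171/S175: visit each terminal
        if check_connection(terminal):              # S172/S173: probe the terminal
            detected_positions.append(terminal)     # S174: store speaker position
    return select_area_setting(detected_positions)  # S176: select area setting

# Usage: three terminals, a speaker attached everywhere except the top one.
setting = detect_speakers_and_set_area(
    ["left", "right", "top"],
    lambda terminal: terminal != "top",  # probe stub
    lambda positions: tuple(positions),  # selection stub
)
print(setting)  # -> ('left', 'right')
```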
[0153]
As described above, by having the display device detect the speakers and set the area, the user can set the area more easily. That is, the display device can output the audio corresponding to the displayed image so as to further enhance the sense of reality, without requiring complicated operations by the user.
[0154]
In addition, in order to detect the connected speaker, a sensor or a switch that detects that the
speaker 233 is connected to each of the concave portions 232 of the display device 231 may be
provided. In that case, the area setting unit 301 acquires output information from those sensors
and switches, and detects a speaker.
[0155]
The series of processes described above can be executed by hardware or by software. When they are executed by software, for example, the audio output position control unit 37, the user position handling processing unit 201, the area setting unit 251, or the area setting unit 301 may be configured as a personal computer as shown in FIG.
[0156]
Referring to FIG. 30, a central processing unit (CPU) 401 of a personal computer 400 executes various processes according to a program stored in a read only memory (ROM) 402 or a program loaded from a storage unit 413 into a random access memory (RAM) 403. The RAM 403 also stores data necessary for the CPU 401 to execute these various processes.
[0157]
The CPU 401, the ROM 402 and the RAM 403 are connected to one another via a bus 404. An
input / output interface 410 is also connected to the bus 404.
[0158]
Connected to the input / output interface 410 are an input unit 411 including a keyboard and a mouse, an output unit 412 including a display such as a CRT (Cathode Ray Tube) or an LCD (Liquid Crystal Display) and a speaker, a storage unit 413 including a hard disk, and a communication unit 414 including a modem. The communication unit 414 performs communication processing via a network including the Internet.
[0159]
Also, a drive 415 is connected to the input / output interface 410 as necessary, removable media 421 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory are mounted as appropriate, and a computer program read from the media is installed into the storage unit 413 as necessary.
[0160]
When the above-described series of processes are executed by software, a program that
configures the software is installed from a network or a recording medium.
[0161]
For example, as shown in FIG. 30, this recording medium is constituted not only by the removable media 421 distributed separately from the apparatus main body to deliver the program to the user, such as a magnetic disk (including a flexible disk), an optical disk (including a CD-ROM (Compact Disk-Read Only Memory) and a DVD (Digital Versatile Disk)), a magneto-optical disk (including an MD (Mini-Disk) (registered trademark)), or a semiconductor memory, on which the program is recorded, but also by the ROM 402 in which the program is recorded and which is delivered to the user in a state of being incorporated in the apparatus main body, the hard disk included in the storage unit 413, and the like.
[0162]
In the present specification, the steps describing the program recorded on the recording medium include not only processing performed in time series in the described order but also processing executed in parallel or individually, not necessarily in time series.
[0163]
Further, in the present specification, the term system refers to an entire apparatus constituted by a plurality of devices.
[0164]
The embodiment of the present invention is not limited to the above-described embodiment, and
various modifications can be made without departing from the scope of the present invention.
[0165]
A perspective view showing a conventional television receiver.
A diagram showing a configuration example of the exterior of a display device to which the present invention is applied.
A block diagram showing a configuration example of the inside of the display device of FIG.
A block diagram showing a detailed configuration example of the audio output position control unit of FIG.
A schematic diagram showing an example of area division.
A block diagram showing a detailed configuration example of the difference average value calculation unit of FIG.
A flowchart explaining an example of the flow of image display processing.
A flowchart explaining an example of the flow of audio output position control processing.
A flowchart explaining an example of the flow of feature pixel difference average value calculation processing.
A diagram showing another configuration example of the exterior of a display device.
A diagram showing a configuration example of a projection system to which the present invention is applied.
A block diagram showing another configuration example of a display device.
A diagram showing still another configuration example of the exterior of a display device.
A flowchart explaining another example of the flow of audio output position control processing.
A block diagram showing a detailed configuration example of an audio output control unit.
A flowchart explaining an example of the flow of audio output control processing.
A block diagram showing another detailed configuration example of an audio output control unit.
A flowchart explaining another example of the flow of audio output control processing.
A block diagram showing still another configuration example of a display device.
A block diagram showing a detailed configuration example of the user position handling processing unit of FIG.
A flowchart explaining an example of the flow of user position handling processing.
A diagram showing still another configuration example of the exterior of a display device.
A schematic diagram showing a display example of an input guidance image.
A block diagram showing still another configuration example of a display device.
A block diagram showing a detailed configuration example of the area setting unit of FIG.
A flowchart explaining an example of the flow of area setting processing.
A block diagram showing still another configuration example of a display device.
A block diagram showing a detailed configuration example of the area setting unit of FIG.
A flowchart explaining another example of the flow of area setting processing.
A diagram showing a configuration example of a personal computer to which an embodiment of the present invention is applied.
Explanation of Reference Numerals
[0166]
10 display device, 11 to 13 speakers, 20 display image, 21 to 23 persons, 37 audio output position control unit, 39 audio output unit, 51 screen area division unit, 52 difference average value calculation unit, 53 audio output control unit, 73 feature pixel extraction unit, 74 variable management unit, 75 difference calculation unit, 77 difference average value calculation unit, 78 difference average value storage unit, 81 number of in-area feature pixels, 82 in-area difference value sum, 171 volume check unit, 172 difference average value correction unit, 173 area specification unit, 174 output control unit, 181 area specification unit, 182 volume adjustment unit, 183 volume confirmation unit, 184 comparison unit, 185 adjustment amount correction unit, 186 output control unit, 201 user position handling processing unit, 211 remote commander position detection unit, 212 delay amount calculation unit, 213 delay control unit, 251 area setting unit, 261 input guidance image display control unit, 262 user input acceptance processing unit, 263 area setting selection unit, 264 division table storage unit, 301 area setting unit, 311 connection confirmation unit, 312 speaker position storage unit, 313 area setting selection unit, 314 division table storage unit