Leonardo Facchin
2014-10-07 10:18:55 UTC
Hello,
I'm trying to calibrate and profile a Dell U2410 external display
connected through an HDMI port, in extended desktop configuration, to my
Asus A56CB-XX175H laptop running MS Windows 8.1 64-bit.
The laptop graphics system makes use of the nVidia Optimus technology.
Basically, the laptop has an Integrated Intel HD Graphics 4000 processor
(IGP) and a Discrete nVidia GeForce GT 740M GPU.
According to the Optimus Technology White Paper, the Discrete GPU is NOT
connected to any kind of display output. The default behaviour is for
the GPU to be powered down into an idle state and for the IGP to handle
the graphics. When the operating system, together with the nVidia
drivers, detects a graphics workload that could benefit from the faster
Discrete GPU, the nVidia card is woken up and handles all the
processing. Once
the frame has been calculated, the data are copied to the Integrated
Processing Unit buffer and the IGP acts as a video display controller
and sends the output to the monitor.
The default behaviour is for the system to act transparently, without
the user ever knowing which card is handling the workload at any given
time, but the user can override this by forcing an association between
any executable and the preferred graphics card.
My intention was to force Photoshop and Bridge to make use of the nVidia
GPU, since it looks like Adobe implemented a few features that actually
benefit from the Discrete GPU computing power.
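By the way, while digging around I found that NVIDIA also documents a
programmatic way for an application to ask Optimus for the discrete GPU:
the executable exports a global variable called NvOptimusEnablement set
to 1, and the driver picks it up by name. This is only my reading of the
developer notes, I haven't tested it, but it would explain why a plain
utility that never renders anything through the driver might not wake
the GPU at all. A minimal sketch in C would look like this:

    /* Untested sketch: exporting this symbol from the .exe is, as far as I
     * understand NVIDIA's Optimus developer notes, the hint that makes the
     * driver prefer the discrete GPU for this application. */
    #include <windows.h>

    __declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;

    int main(void)
    {
        /* ...the application's actual (GPU-rendering) work would go here... */
        return 0;
    }
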
The problem is that right now I'm totally lost and confused about how
the "two" graphics cards actually interact with the calibration and
profiling process. Let's assume that I could force both the relevant
Adobe applications to run on the nVidia GPU (which I did) and the Argyll
CMS components to do the same (which I didn't: even if I associate the
Argyll executables with the nVidia GPU, the GPU remains idle when I run
them, at least according to the nVidia Optimus GPU State Viewer, a small
utility meant to show when the GPU is powered up and running).
Still, how can I know how many LUTs the system can access and which
graphics card controls each of them? Could the two cards be sharing the
LUTs? Does that even make sense? Sorry if the questions sound stupid,
but I really have no idea how a graphics card works in detail, and all
the sources I could find are either far too generic to be useful or too
detailed and technical for me to parse the information.
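While trying to make sense of this, I put together a small (untested,
possibly naive) C sketch based on the MSDN documentation for
EnumDisplayDevices, CreateDC and GetDeviceGammaRamp. The idea is that
each output attached to the desktop gets its own device context, and
asking that DC for its gamma ramp should tell me whether it exposes its
own LUT and which adapter (the DeviceString) actually drives it:

    /* Untested sketch: list the desktop outputs, which adapter drives each,
     * and whether a per-output gamma ramp (video LUT) is exposed via GDI.
     * Build with something like: gcc lutlist.c -lgdi32 */
    #include <windows.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        DISPLAY_DEVICEA dd;
        WORD ramp[3][256];   /* 256 16-bit entries per R, G, B channel */
        DWORD i;

        memset(&dd, 0, sizeof(dd));
        dd.cb = sizeof(dd);

        for (i = 0; EnumDisplayDevicesA(NULL, i, &dd, 0); i++) {
            if (!(dd.StateFlags & DISPLAY_DEVICE_ATTACHED_TO_DESKTOP))
                continue;    /* skip outputs that are not part of the desktop */

            HDC hdc = CreateDCA(dd.DeviceName, NULL, NULL, NULL);
            if (hdc == NULL)
                continue;

            printf("%s driven by \"%s\": gamma ramp %s\n",
                   dd.DeviceName, dd.DeviceString,
                   GetDeviceGammaRamp(hdc, ramp) ? "readable" : "not readable");
            DeleteDC(hdc);
        }
        return 0;
    }

If I read the White Paper correctly, on an Optimus laptop both outputs
should show up as driven by the Intel adapter, so the LUTs the
calibration is loaded into would belong to the IGP no matter which GPU
renders Photoshop's content. Is that a correct way to think about it?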
When I access the Windows Color Management Control Panel the two
displays are listed as follows:
1. Generic PnP Monitor - Intel(R) HD Graphics 4000
2. Dell U2410(HDMI) - Intel(R) HD Graphics 4000
which seems consistent with the White Paper's statement that the nVidia
card is not connected directly to any video output but only to the
Intel IGP.
I know that the easiest solution would probably be to stick with the
Integrated Processing Unit and be done with it, but on the one hand the
confusion is undermining my trust in the accuracy of my color-managed
workflow, and on the other I've developed a curiosity to understand a
little better how this system works.
Any help, including pointing me to relevant documentation, is very
welcome. Thanks in advance,
Leonardo Facchin
--
My photo gallery on Flickr
<http://www.flickr.com/photos/leonardo_facchin/>