Diffstat (limited to 'en/devices/camera/camera3_crop_reprocess.html')
-rw-r--r--  en/devices/camera/camera3_crop_reprocess.html | 98
1 file changed, 49 insertions(+), 49 deletions(-)
diff --git a/en/devices/camera/camera3_crop_reprocess.html b/en/devices/camera/camera3_crop_reprocess.html
index 00c5bbed..1e24db3b 100644
--- a/en/devices/camera/camera3_crop_reprocess.html
+++ b/en/devices/camera/camera3_crop_reprocess.html
@@ -24,21 +24,21 @@
<h2 id="output-stream">Output streams</h2>
-<p> Unlike the old camera subsystem, which has 3-4 different ways of producing data
- from the camera (ANativeWindow-based preview operations, preview callbacks,
- video callbacks, and takePicture callbacks), the new subsystem operates solely
- on the ANativeWindow-based pipeline for all resolutions and output formats.
- Multiple such streams can be configured at once, to send a single frame to many
- targets such as the GPU, the video encoder, RenderScript, or app-visible buffers
+<p> Unlike the old camera subsystem, which has 3-4 different ways of producing data
+ from the camera (ANativeWindow-based preview operations, preview callbacks,
+ video callbacks, and takePicture callbacks), the new subsystem operates solely
+ on the ANativeWindow-based pipeline for all resolutions and output formats.
+ Multiple such streams can be configured at once, to send a single frame to many
+ targets such as the GPU, the video encoder, RenderScript, or app-visible buffers
(RAW Bayer, processed YUV buffers, or JPEG-encoded buffers).</p>
-<p>As an optimization, these output streams must be configured ahead of time, and
- only a limited number may exist at once. This allows for pre-allocation of
- memory buffers and configuration of the camera hardware, so that when requests
- are submitted with multiple or varying output pipelines listed, there won't be
+<p>As an optimization, these output streams must be configured ahead of time, and
+ only a limited number may exist at once. This allows for pre-allocation of
+ memory buffers and configuration of the camera hardware, so that when requests
+ are submitted with multiple or varying output pipelines listed, there won't be
delays or latency in fulfilling the request.</p>
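+<p>As a rough sketch of what this ahead-of-time configuration looks like at the
+ HAL boundary, assuming the camera3 stream types and constants declared in
+ hardware/camera3.h (the stream sizes here are hypothetical, and the exact
+ fields available depend on the HAL version):</p>
+<pre class="devsite-click-to-copy">
+#include &lt;hardware/camera3.h&gt;
+
+/* Declare every output stream up front, then hand the full set to the
+ * HAL in a single configure_streams() call so that buffers can be
+ * pre-allocated before any capture requests are submitted. */
+camera3_stream_t preview = {
+    .stream_type = CAMERA3_STREAM_OUTPUT,
+    .width = 1280, .height = 720,     /* hypothetical preview size */
+    .format = HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED,
+};
+camera3_stream_t jpeg = {
+    .stream_type = CAMERA3_STREAM_OUTPUT,
+    .width = 2000, .height = 1500,    /* hypothetical still-capture size */
+    .format = HAL_PIXEL_FORMAT_BLOB,  /* JPEG-encoded buffers */
+};
+camera3_stream_t *streams[] = { &preview, &jpeg };
+camera3_stream_configuration_t config = {
+    .num_streams = 2,
+    .streams = streams,
+};
+/* dev is an already-opened camera3_device_t pointer. */
+int err = dev->ops->configure_streams(dev, &config);
+</pre>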
-<p>To support backwards compatibility with the current camera API, at least 3
- simultaneous YUV output streams must be supported, plus one JPEG stream. This is
- required for video snapshot support with the application also receiving YUV
+<p>To support backwards compatibility with the current camera API, at least 3
+ simultaneous YUV output streams must be supported, plus one JPEG stream. This is
+ required for video snapshot support with the application also receiving YUV
buffers:</p>
<ul>
<li>One stream to the GPU/SurfaceView (opaque YUV format) for preview</li>
@@ -49,48 +49,48 @@
<p>The exact requirements are still being defined since the corresponding API
isn't yet finalized.</p>
<h2>Cropping</h2>
-<p>Cropping of the full pixel array (for digital zoom and other use cases where a
- smaller FOV is desirable) is communicated through the ANDROID_SCALER_CROP_REGION
- setting. This is a per-request setting, and can change on a per-request basis,
+<p>Cropping of the full pixel array (for digital zoom and other use cases where a
+ smaller FOV is desirable) is communicated through the ANDROID_SCALER_CROP_REGION
+ setting. This is a per-request setting, and can change on a per-request basis,
which is critical for implementing smooth digital zoom.</p>
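+<p>As an illustrative sketch (a hypothetical helper, not part of the HAL
+ interface), a smooth zoom ramp reduces to recomputing a centered crop region
+ from the active array dimensions and the current zoom factor, and writing it
+ to ANDROID_SCALER_CROP_REGION on every request:</p>
+<pre class="devsite-click-to-copy">
+#include &lt;math.h&gt;
+
+struct region { int x, y, width, height; };
+
+/* Centered crop region for a given digital zoom factor, recomputed
+ * per request to ramp the zoom smoothly. */
+struct region crop_for_zoom(int active_w, int active_h, double zoom) {
+    struct region r;
+    r.width  = (int)floor(active_w / zoom);
+    r.height = (int)floor(active_h / zoom);
+    r.x = (active_w - r.width) / 2;   /* keep the region centered */
+    r.y = (active_h - r.height) / 2;
+    return r;
+}
+
+/* Example: crop_for_zoom(2000, 1500, 2.0) returns (500, 375, 1000, 750),
+ * the 4:3 crop region used in the figures below. */
+</pre>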
-<p>The region is defined as a rectangle (x, y, width, height), with (x, y)
- describing the top-left corner of the rectangle. The rectangle is defined on the
- coordinate system of the sensor active pixel array, with (0,0) being the
- top-left pixel of the active pixel array. Therefore, the width and height cannot
- be larger than the dimensions reported in the ANDROID_SENSOR_ACTIVE_PIXEL_ARRAY
- static info field. The minimum allowed width and height are reported by the HAL
- through the ANDROID_SCALER_MAX_DIGITAL_ZOOM static info field, which describes
- the maximum supported zoom factor. Therefore, the minimum crop region width and
+<p>The region is defined as a rectangle (x, y, width, height), with (x, y)
+ describing the top-left corner of the rectangle. The rectangle is defined on the
+ coordinate system of the sensor active pixel array, with (0,0) being the
+ top-left pixel of the active pixel array. Therefore, the width and height cannot
+ be larger than the dimensions reported in the ANDROID_SENSOR_ACTIVE_PIXEL_ARRAY
+ static info field. The minimum allowed width and height are reported by the HAL
+ through the ANDROID_SCALER_MAX_DIGITAL_ZOOM static info field, which describes
+ the maximum supported zoom factor. Therefore, the minimum crop region width and
height are:</p>
-<pre>
+<pre class="devsite-click-to-copy">
{width, height} =
{ floor(ANDROID_SENSOR_ACTIVE_PIXEL_ARRAY[0] /
ANDROID_SCALER_MAX_DIGITAL_ZOOM),
floor(ANDROID_SENSOR_ACTIVE_PIXEL_ARRAY[1] /
ANDROID_SCALER_MAX_DIGITAL_ZOOM) }
- </pre>
-<p>If the crop region needs to fulfill specific requirements (for example, it needs
- to start on even coordinates, and its width/height needs to be even), the HAL
- must do the necessary rounding and write out the final crop region used in the
- output result metadata. Similarly, if the HAL implements video stabilization, it
- must adjust the result crop region to describe the region actually included in
- the output after video stabilization is applied. In general, a camera-using
- application must be able to determine the field of view it is receiving based on
+</pre>
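+<p>For example, on the hypothetical 2000 x 1500 pixel array used in the figures
+ below, an assumed ANDROID_SCALER_MAX_DIGITAL_ZOOM of 4.0 yields a minimum crop
+ region of floor(2000 / 4.0) x floor(1500 / 4.0) = 500 x 375 pixels.</p>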
+<p>If the crop region needs to fulfill specific requirements (for example, it needs
+ to start on even coordinates, and its width/height needs to be even), the HAL
+ must do the necessary rounding and write out the final crop region used in the
+ output result metadata. Similarly, if the HAL implements video stabilization, it
+ must adjust the result crop region to describe the region actually included in
+ the output after video stabilization is applied. In general, a camera-using
+ application must be able to determine the field of view it is receiving based on
the crop region, the dimensions of the image sensor, and the lens focal length.</p>
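+<p>As a sketch of that calculation (a hypothetical helper; the pixel pitch and
+ lens focal length are assumed to be known, both in millimeters):</p>
+<pre class="devsite-click-to-copy">
+#include &lt;math.h&gt;
+
+/* Horizontal field of view implied by a crop region, computed from the
+ * crop width in pixels, the physical pixel pitch, and the lens focal
+ * length. */
+double horizontal_fov_deg(int crop_width_px, double pixel_pitch_mm,
+                          double focal_length_mm) {
+    double crop_width_mm = crop_width_px * pixel_pitch_mm;
+    return 2.0 * atan2(crop_width_mm / 2.0, focal_length_mm) * 180.0 / M_PI;
+}
+</pre>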
-<p>Since the crop region applies to all streams, which may have different aspect
- ratios than the crop region, the exact sensor region used for each stream may be
- smaller than the crop region. Specifically, each stream should maintain square
- pixels and its aspect ratio by minimally further cropping the defined crop
- region. If the stream's aspect ratio is wider than the crop region, the stream
- should be further cropped vertically, and if the stream's aspect ratio is
- narrower than the crop region, the stream should be further cropped
+<p>Since the crop region applies to all streams, which may have different aspect
+ ratios than the crop region, the exact sensor region used for each stream may be
+ smaller than the crop region. Specifically, each stream should maintain square
+ pixels and its aspect ratio by minimally further cropping the defined crop
+ region. If the stream's aspect ratio is wider than the crop region, the stream
+ should be further cropped vertically, and if the stream's aspect ratio is
+ narrower than the crop region, the stream should be further cropped
horizontally.</p>
-<p>In all cases, the stream crop must be centered within the full crop region, and
- each stream is only either cropped horizontally or vertical relative to the full
+<p>In all cases, the stream crop must be centered within the full crop region, and
+ each stream is only either cropped horizontally or vertically relative to the full
crop region, never both.</p>
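+<p>A minimal sketch of this rule (a hypothetical helper; integer division
+ floors the result, matching the formula above):</p>
+<pre class="devsite-click-to-copy">
+struct region { int x, y, width, height; };
+
+/* Shrink the request's crop region minimally, and centered, so that it
+ * matches a stream's aspect ratio; only one axis is ever reduced. */
+struct region stream_crop(struct region crop, int stream_w, int stream_h) {
+    struct region out = crop;
+    /* Compare aspect ratios by cross-multiplying to avoid floats. */
+    if (stream_w * crop.height > stream_h * crop.width) {
+        /* Stream is wider than the crop region: crop vertically. */
+        out.height = crop.width * stream_h / stream_w;
+        out.y = crop.y + (crop.height - out.height) / 2;
+    } else if (stream_h * crop.width > stream_w * crop.height) {
+        /* Stream is narrower than the crop region: crop horizontally. */
+        out.width = crop.height * stream_w / stream_h;
+        out.x = crop.x + (crop.width - out.width) / 2;
+    }
+    return out;
+}
+
+/* Example: for crop region (500, 375, 1000, 750) and a 1024x1024 stream,
+ * the result is (625, 375, 750, 750), matching Figure 4 below. */
+</pre>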
-<p>For example, if two streams are defined, a 640x480 stream (4:3 aspect), and a
- 1280x720 stream (16:9 aspect), below demonstrates the expected output regions
- for each stream for a few sample crop regions, on a hypothetical 3 MP (2000 x
+<p>For example, if two streams are defined, a 640x480 stream (4:3 aspect) and a
+ 1280x720 stream (16:9 aspect), the figures below show the expected output
+ regions for each stream for a few sample crop regions, on a hypothetical 3 MP (2000 x
1500 pixel array) sensor.</p>
<p>
Crop region: (500, 375, 1000, 750) (4:3 aspect ratio)<br/>
@@ -118,7 +118,7 @@ isn't yet finalized.</p>
<strong>Figure 3.</strong> 1:1 aspect ratio
</p>
<p>
- And a final example, a 1024x1024 square aspect ratio stream instead of the 480p
+ As a final example, a 1024x1024 square aspect ratio stream instead of the 480p
stream:<br/>
Crop region: (500, 375, 1000, 750) (4:3 aspect ratio)<br/>
1024x1024 stream crop: (625, 375, 750, 750)<br/>
@@ -129,9 +129,9 @@ isn't yet finalized.</p>
<strong>Figure 4.</strong> 4:3 aspect ratio, square
</p>
<h2 id="reprocessing">Reprocessing</h2>
-<p> Additional support for raw image files is provided by reprocessing support for RAW Bayer
- data. This support allows the camera pipeline to process a previously captured
- RAW buffer and metadata (an entire frame that was recorded previously), to
+<p> Reprocessing support for RAW Bayer data adds support for raw image files.
+ It allows the camera pipeline to process a previously captured RAW buffer and
+ its metadata (an entire frame that was recorded previously) to
produce a new rendered YUV or JPEG output.</p>
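+<p>A rough sketch of what such a request could look like at the HAL boundary,
+ assuming the camera3 request types from hardware/camera3.h (the stream and
+ buffer variables here are hypothetical, and stream setup is omitted):</p>
+<pre class="devsite-click-to-copy">
+#include &lt;hardware/camera3.h&gt;
+
+/* Resubmit a previously captured RAW buffer as the input of a new
+ * request whose output is a JPEG stream. */
+camera3_stream_buffer_t raw_input = {
+    .stream = raw_input_stream,    /* a configured input stream */
+    .buffer = saved_raw_buffer,    /* RAW buffer captured earlier */
+};
+camera3_stream_buffer_t jpeg_output = {
+    .stream = jpeg_stream,         /* a configured BLOB output stream */
+    .buffer = jpeg_buffer,
+};
+camera3_capture_request_t request = {
+    .frame_number = next_frame_number,
+    .settings = saved_metadata,    /* metadata from the original frame */
+    .input_buffer = &raw_input,
+    .num_output_buffers = 1,
+    .output_buffers = &jpeg_output,
+};
+int err = dev->ops->process_capture_request(dev, &request);
+</pre>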
</body>