ColorCamera

The ColorCamera node is a source of image frames. You can control it at runtime with the inputControl and inputConfig inputs.

How to place it

Python

pipeline = dai.Pipeline()
cam = pipeline.create(dai.node.ColorCamera)

C++

dai::Pipeline pipeline;
auto cam = pipeline.create<dai::node::ColorCamera>();

Inputs and Outputs

                       ColorCamera node
               ┌──────────────────────────────┐
               │   ┌─────────────┐            │
               │   │    Image    │ raw        │     raw
               │   │    Sensor   │---┬--------├────────►
               │   └────▲────────┘   |        │
               │        │   ┌--------┘        │
               │      ┌─┴───▼─┐               │     isp
inputControl   │      │       │-------┬-------├────────►
──────────────►│------│  ISP  │ ┌─────▼────┐  │   video
               │      │       │ |          |--├────────►
               │      └───────┘ │   Image  │  │   still
inputConfig    │                │   Post-  │--├────────►
──────────────►│----------------|Processing│  │ preview
               │                │          │--├────────►
               │                └──────────┘  │
               └──────────────────────────────┘

Message types

  • inputConfig - ImageManipConfig

  • inputControl - CameraControl

  • raw - ImgFrame - RAW10-packed Bayer data; a minimal unpacking sketch is shown after this list

  • isp - ImgFrame - YUV420 planar (same as YU12/IYUV/I420)

  • still - ImgFrame - NV12, suitable for larger frames. A frame is produced only when a capture command is sent to the ColorCamera via inputControl, so it behaves like taking a photo

  • preview - ImgFrame - RGB (or BGR planar/interleaved if configured), mostly suited for small previews and for feeding frames into a NeuralNetwork node

  • video - ImgFrame - NV12, suitable for larger frames
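
The raw output is RAW10-packed: every 4 pixels occupy 5 bytes (4 bytes of upper bits plus one byte carrying the 2 LSBs of each pixel). Below is a minimal host-side unpacking sketch, assuming a buffer with no per-line padding; the helper name and the frombuffer usage are illustrative, not part of the depthai API.

import numpy as np

def unpack_raw10(packed: np.ndarray, width: int, height: int) -> np.ndarray:
    # MIPI RAW10: groups of 5 bytes -> 4 pixels of 10 bits each
    groups = packed.reshape(-1, 5)
    msb = groups[:, :4].astype(np.uint16) << 2                 # upper 8 bits of each pixel
    lsb_byte = groups[:, 4].astype(np.uint16)
    lsb = np.stack([(lsb_byte >> (2 * i)) & 0x3 for i in range(4)], axis=1)
    return (msb | lsb).reshape(height, width)                  # Bayer values in 0..1023

# Illustrative usage with an ImgFrame received from the 'raw' output:
# bayer = unpack_raw10(np.frombuffer(raw_frame.getData(), dtype=np.uint8),
#                      raw_frame.getWidth(), raw_frame.getHeight())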

The ISP (image signal processor) performs the Bayer transformation (demosaicing), noise reduction, and other image enhancements. It interacts with the 3A algorithms: auto-focus, auto-exposure, and auto-white-balance, which handle image sensor adjustments such as exposure time, sensitivity (ISO), and lens position (if the camera module has a motorized lens) at runtime.
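
These 3A parameters can also be adjusted at runtime by routing CameraControl messages into inputControl through an XLinkIn node. A minimal sketch; the stream name and the particular setters shown here are illustrative:

import depthai as dai

pipeline = dai.Pipeline()
cam = pipeline.create(dai.node.ColorCamera)

xin_ctrl = pipeline.create(dai.node.XLinkIn)       # host -> device control stream
xin_ctrl.setStreamName("control")
xin_ctrl.out.link(cam.inputControl)

with dai.Device(pipeline) as device:
    ctrl_queue = device.getInputQueue("control")

    ctrl = dai.CameraControl()
    ctrl.setManualExposure(20000, 400)             # exposure time (us), ISO
    ctrl_queue.send(ctrl)

    ctrl = dai.CameraControl()
    ctrl.setAutoFocusTrigger()                     # request a one-shot autofocus sweep
    ctrl_queue.send(ctrl)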

Image Post-Processing converts YUV420 planar frames from the ISP into video/preview/still frames.

When the sensor resolution is set to 12MP and the video output is used, you get 4K video; the 4K frames are cropped from the 12MP frames (not downsampled).
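
For example, a short configuration sketch (assuming the THE_12_MP resolution enum available in current depthai releases):

cam.setResolution(dai.ColorCameraProperties.SensorResolution.THE_12_MP)
cam.setVideoSize(3840, 2160)   # 4K video output, cropped (not downsampled) from the 12MP frame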

Usage

Python

pipeline = dai.Pipeline()
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)
cam.setBoardSocket(dai.CameraBoardSocket.RGB)
cam.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
cam.setInterleaved(False)
cam.setColorOrder(dai.ColorCameraProperties.ColorOrder.RGB)

C++

dai::Pipeline pipeline;
auto cam = pipeline.create<dai::node::ColorCamera>();
cam->setPreviewSize(300, 300);
cam->setBoardSocket(dai::CameraBoardSocket::RGB);
cam->setResolution(dai::ColorCameraProperties::SensorResolution::THE_1080_P);
cam->setInterleaved(false);
cam->setColorOrder(dai::ColorCameraProperties::ColorOrder::RGB);
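
To consume frames on the host, the preview output is typically linked to an XLinkOut node. A minimal host-side sketch, assuming OpenCV is available:

import cv2
import depthai as dai

pipeline = dai.Pipeline()
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)
cam.setInterleaved(False)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("preview")
cam.preview.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue("preview", maxSize=4, blocking=False)
    while True:
        frame = q.get().getCvFrame()     # numpy array ready for OpenCV
        cv2.imshow("preview", frame)
        if cv2.waitKey(1) == ord('q'):
            break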

Examples of functionality

Reference

class depthai.node.ColorCamera

ColorCamera node. For use with color sensors.

class Connection

Connection between an Input and Output

class Id

Node identifier. Unique for every node within a single Pipeline

Properties

alias of depthai.ColorCameraProperties

getAssetManager(*args, **kwargs)

Overloaded function.

  1. getAssetManager(self: depthai.Node) -> depthai.AssetManager

Get node AssetManager as a const reference

  2. getAssetManager(self: depthai.Node) -> depthai.AssetManager

Get node AssetManager as a const reference

getBoardSocket(self: depthai.node.ColorCamera) → depthai.CameraBoardSocket

Retrieves which board socket to use

Returns

Board socket to use

getCamId(self: depthai.node.ColorCamera) → int

getColorOrder(self: depthai.node.ColorCamera) → depthai.ColorCameraProperties.ColorOrder

Get color order of preview output frames. RGB or BGR

getFp16(self: depthai.node.ColorCamera) → bool

Get fp16 (0..255) data of preview output frames

getFps(self: depthai.node.ColorCamera) → float

Get rate at which camera should produce frames

Returns

Rate in frames per second

getImageOrientation(self: depthai.node.ColorCamera) → depthai.CameraImageOrientation

Get camera image orientation

getInputRefs(*args, **kwargs)

Overloaded function.

  1. getInputRefs(self: depthai.Node) -> List[depthai.Node.Input]

Retrieves reference to node inputs

  2. getInputRefs(self: depthai.Node) -> List[depthai.Node.Input]

Retrieves reference to node inputs

getInputs(self: depthai.Node) → List[depthai.Node.Input]

Retrieves all node inputs

getInterleaved(self: depthai.node.ColorCamera) → bool

Get planar or interleaved data of preview output frames

getIspHeight(self: depthai.node.ColorCamera) → int

Get ‘isp’ output height

getIspSize(self: depthai.node.ColorCamera) → Tuple[int, int]

Get ‘isp’ output resolution as size, after scaling

getIspWidth(self: depthai.node.ColorCamera) → int

Get ‘isp’ output width

getName(self: depthai.Node) → str

Retrieves the node's name

getOutputRefs(*args, **kwargs)

Overloaded function.

  1. getOutputRefs(self: depthai.Node) -> List[depthai.Node.Output]

Retrieves reference to node outputs

  2. getOutputRefs(self: depthai.Node) -> List[depthai.Node.Output]

Retrieves reference to node outputs

getOutputs(self: depthai.Node) → List[depthai.Node.Output]

Retrieves all node outputs

getParentPipeline(*args, **kwargs)

Overloaded function.

  1. getParentPipeline(self: depthai.Node) -> depthai.Pipeline

  2. getParentPipeline(self: depthai.Node) -> depthai.Pipeline

getPreviewHeight(self: depthai.node.ColorCamera) → int

Get preview height

getPreviewKeepAspectRatio(self: depthai.node.ColorCamera) → bool

See also

setPreviewKeepAspectRatio

Returns

Preview keep aspect ratio option

getPreviewSize(self: depthai.node.ColorCamera) → Tuple[int, int]

Get preview size as tuple

getPreviewWidth(self: depthai.node.ColorCamera) → int

Get preview width

getResolution(self: depthai.node.ColorCamera) → depthai.ColorCameraProperties.SensorResolution

Get sensor resolution

getResolutionHeight(self: depthai.node.ColorCamera) → int

Get sensor resolution height

getResolutionSize(self: depthai.node.ColorCamera) → Tuple[int, int]

Get sensor resolution as size

getResolutionWidth(self: depthai.node.ColorCamera) → int

Get sensor resolution width

getSensorCrop(self: depthai.node.ColorCamera) → Tuple[float, float]

Returns

Sensor top left crop coordinates

getSensorCropX(self: depthai.node.ColorCamera) → float

Get sensor top left x crop coordinate

getSensorCropY(self: depthai.node.ColorCamera) → float

Get sensor top left y crop coordinate

getStillHeight(self: depthai.node.ColorCamera) → int

Get still height

getStillSize(self: depthai.node.ColorCamera) → Tuple[int, int]

Get still size as tuple

getStillWidth(self: depthai.node.ColorCamera) → int

Get still width

getVideoHeight(self: depthai.node.ColorCamera) → int

Get video height

getVideoSize(self: depthai.node.ColorCamera) → Tuple[int, int]

Get video size as tuple

getVideoWidth(self: depthai.node.ColorCamera) → int

Get video width

getWaitForConfigInput(self: depthai.node.ColorCamera) → bool

See also

setWaitForConfigInput

Returns

True if wait for inputConfig message, false otherwise

property id

Id of node

property initialControl

Initial control options to apply to sensor

property inputConfig

Input for ImageManipConfig message, which can modify crop parameters in runtime

Default queue is non-blocking with size 8

property inputControl

Input for CameraControl message, which can modify camera parameters in runtime

Default queue is blocking with size 8

property isp

Outputs ImgFrame message that carries YUV420 planar (I420/IYUV) frame data.

Generated by the ISP engine, and the source for the ‘video’, ‘preview’ and ‘still’ outputs

property preview

Outputs ImgFrame message that carries BGR/RGB planar/interleaved encoded frame data.

Suitable for use with NeuralNetwork node

property raw

Outputs ImgFrame message that carries RAW10-packed (MIPI CSI-2 format) frame data.

Captured directly from the camera sensor, and the source for the ‘isp’ output.

sensorCenterCrop(self: depthai.node.ColorCamera) → None

Specify sensor center crop. Resolution size / video size

setBoardSocket(self: depthai.node.ColorCamera, boardSocket: depthai.CameraBoardSocket) → None

Specify which board socket to use

Parameter boardSocket:

Board socket to use

setCamId(self: depthai.node.ColorCamera, arg0: int) → None

setColorOrder(self: depthai.node.ColorCamera, colorOrder: depthai.ColorCameraProperties.ColorOrder) → None

Set color order of preview output images. RGB or BGR

setFp16(self: depthai.node.ColorCamera, fp16: bool) → None

Set fp16 (0..255) data type of preview output frames

setFps(self: depthai.node.ColorCamera, fps: float) → None

Set rate at which camera should produce frames

Parameter fps:

Rate in frames per second

setImageOrientation(self: depthai.node.ColorCamera, imageOrientation: depthai.CameraImageOrientation) → None

Set camera image orientation

setInterleaved(self: depthai.node.ColorCamera, interleaved: bool) → None

Set planar or interleaved data of preview output frames

setIspScale(*args, **kwargs)

Overloaded function.

  1. setIspScale(self: depthai.node.ColorCamera, numerator: int, denominator: int) -> None

Set ‘isp’ output scaling (numerator/denominator), preserving the aspect ratio. The fraction numerator/denominator is first simplified to an irreducible form, then a set of hardware scaler constraints is applied: max numerator = 16, max denominator = 63

  2. setIspScale(self: depthai.node.ColorCamera, scale: Tuple[int, int]) -> None

Set ‘isp’ output scaling, as a tuple <numerator, denominator>

  3. setIspScale(self: depthai.node.ColorCamera, horizNum: int, horizDenom: int, vertNum: int, vertDenom: int) -> None

Set ‘isp’ output scaling, per each direction. If the horizontal scaling factor (horizNum/horizDenom) differs from the vertical scaling factor (vertNum/vertDenom), a distorted (stretched or squished) image is generated

  4. setIspScale(self: depthai.node.ColorCamera, horizScale: Tuple[int, int], vertScale: Tuple[int, int]) -> None

Set ‘isp’ output scaling, per each direction, as <numerator, denominator> tuples
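
A quick sketch of the resulting arithmetic, assuming a 1080p sensor resolution:

cam.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
cam.setIspScale(2, 3)        # 1920x1080 scaled by 2/3 -> 1280x720 on the 'isp' output
print(cam.getIspSize())      # (1280, 720)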

setPreviewKeepAspectRatio(self: depthai.node.ColorCamera, keep: bool) → None

Specifies whether the preview output should preserve the aspect ratio after downscaling from the video size.

Parameter keep:

If true, a larger crop region is considered so the final image can still be produced in the specified aspect ratio. Otherwise, the video frame is resized to fit the preview size

setPreviewSize(*args, **kwargs)

Overloaded function.

  1. setPreviewSize(self: depthai.node.ColorCamera, width: int, height: int) -> None

Set preview output size

  2. setPreviewSize(self: depthai.node.ColorCamera, size: Tuple[int, int]) -> None

Set preview output size, as a tuple <width, height>

setResolution(self: depthai.node.ColorCamera, resolution: depthai.ColorCameraProperties.SensorResolution) → None

Set sensor resolution

setSensorCrop(self: depthai.node.ColorCamera, x: float, y: float) → None

Specifies sensor crop rectangle

Parameter x:

Top left X coordinate

Parameter y:

Top left Y coordinate

setStillSize(*args, **kwargs)

Overloaded function.

  1. setStillSize(self: depthai.node.ColorCamera, width: int, height: int) -> None

Set still output size

  2. setStillSize(self: depthai.node.ColorCamera, size: Tuple[int, int]) -> None

Set still output size, as a tuple <width, height>

setVideoSize(*args, **kwargs)

Overloaded function.

  1. setVideoSize(self: depthai.node.ColorCamera, width: int, height: int) -> None

Set video output size

  2. setVideoSize(self: depthai.node.ColorCamera, size: Tuple[int, int]) -> None

Set video output size, as a tuple <width, height>

setWaitForConfigInput(self: depthai.node.ColorCamera, wait: bool) → None

Specify whether to wait until inputConfig receives a configuration message before sending out a frame.

Parameter wait:

True to wait for inputConfig message, false otherwise
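
A minimal sketch of how this pairs with inputConfig, assuming an XLinkIn node and a host-side queue named "config" (both illustrative), with a normalized crop rectangle sent per frame:

cam.setWaitForConfigInput(True)                # hold each frame until a config message arrives

xin_cfg = pipeline.create(dai.node.XLinkIn)
xin_cfg.setStreamName("config")
xin_cfg.out.link(cam.inputConfig)

# Host side (config_queue = device.getInputQueue("config")):
cfg = dai.ImageManipConfig()
cfg.setCropRect(0.1, 0.1, 0.6, 0.6)            # xmin, ymin, xmax, ymax, normalized 0..1
config_queue.send(cfg)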

property still

Outputs ImgFrame message that carries NV12 encoded (YUV420, UV plane interleaved) frame data.

The message is sent only when a CameraControl message with the captureStill command set arrives at inputControl.
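
A short sketch of triggering a still capture from the host; ctrl_queue stands for an input queue linked to inputControl and still_queue for an output queue of an XLinkOut linked to the still output (both illustrative):

ctrl = dai.CameraControl()
ctrl.setCaptureStill(True)        # request one frame on the 'still' output
ctrl_queue.send(ctrl)
still_frame = still_queue.get()   # NV12 ImgFrame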

property video

Outputs ImgFrame message that carries NV12 encoded (YUV420, UV plane interleaved) frame data.

Suitable for use with VideoEncoder node

class dai::node::ColorCamera : public dai::NodeCRTP<Node, ColorCamera, ColorCameraProperties>

ColorCamera node. For use with color sensors.

Public Functions

ColorCamera(const std::shared_ptr<PipelineImpl> &par, int64_t nodeId)

Constructs ColorCamera node.

ColorCamera(const std::shared_ptr<PipelineImpl> &par, int64_t nodeId, std::unique_ptr<Properties> props)

int getScaledSize(int input, int num, int denom) const

Computes the scaled size given numerator and denominator

void setBoardSocket(CameraBoardSocket boardSocket)

Specify which board socket to use

Parameters
  • boardSocket: Board socket to use

CameraBoardSocket getBoardSocket() const

Retrieves which board socket to use

Return

Board socket to use

void setCamId(int64_t id)

Set which color camera to use.

int64_t getCamId() const

Get which color camera to use.

void setImageOrientation(CameraImageOrientation imageOrientation)

Set camera image orientation.

CameraImageOrientation getImageOrientation() const

Get camera image orientation.

void setColorOrder(ColorCameraProperties::ColorOrder colorOrder)

Set color order of preview output images. RGB or BGR.

ColorCameraProperties::ColorOrder getColorOrder() const

Get color order of preview output frames. RGB or BGR.

void setInterleaved(bool interleaved)

Set planar or interleaved data of preview output frames.

bool getInterleaved() const

Get planar or interleaved data of preview output frames.

void setFp16(bool fp16)

Set fp16 (0..255) data type of preview output frames.

bool getFp16() const

Get fp16 (0..255) data of preview output frames.

void setPreviewSize(int width, int height)

Set preview output size.

void setPreviewSize(std::tuple<int, int> size)

Set preview output size, as a tuple <width, height>

void setVideoSize(int width, int height)

Set video output size.

void setVideoSize(std::tuple<int, int> size)

Set video output size, as a tuple <width, height>

void setStillSize(int width, int height)

Set still output size.

void setStillSize(std::tuple<int, int> size)

Set still output size, as a tuple <width, height>

void setResolution(Properties::SensorResolution resolution)

Set sensor resolution.

Properties::SensorResolution getResolution() const

Get sensor resolution.

void setIspScale(int numerator, int denominator)

Set ‘isp’ output scaling (numerator/denominator), preserving the aspect ratio. The fraction numerator/denominator is first simplified to an irreducible form, then a set of hardware scaler constraints is applied: max numerator = 16, max denominator = 63

void setIspScale(std::tuple<int, int> scale)

Set ‘isp’ output scaling, as a tuple <numerator, denominator>

void setIspScale(int horizNum, int horizDenom, int vertNum, int vertDenom)

Set ‘isp’ output scaling, per each direction. If the horizontal scaling factor (horizNum/horizDenom) differs from the vertical scaling factor (vertNum/vertDenom), a distorted (stretched or squished) image is generated

void setIspScale(std::tuple<int, int> horizScale, std::tuple<int, int> vertScale)

Set ‘isp’ output scaling, per each direction, as <numerator, denominator> tuples.

void setFps(float fps)

Set rate at which camera should produce frames

Parameters
  • fps: Rate in frames per second

float getFps() const

Get rate at which camera should produce frames

Return

Rate in frames per second

std::tuple<int, int> getPreviewSize() const

Get preview size as tuple.

int getPreviewWidth() const

Get preview width.

int getPreviewHeight() const

Get preview height.

std::tuple<int, int> getVideoSize() const

Get video size as tuple.

int getVideoWidth() const

Get video width.

int getVideoHeight() const

Get video height.

std::tuple<int, int> getStillSize() const

Get still size as tuple.

int getStillWidth() const

Get still width.

int getStillHeight() const

Get still height.

std::tuple<int, int> getResolutionSize() const

Get sensor resolution as size.

int getResolutionWidth() const

Get sensor resolution width.

int getResolutionHeight() const

Get sensor resolution height.

std::tuple<int, int> getIspSize() const

Get ‘isp’ output resolution as size, after scaling.

int getIspWidth() const

Get ‘isp’ output width.

int getIspHeight() const

Get ‘isp’ output height.

void sensorCenterCrop()

Specify sensor center crop. Resolution size / video size

void setSensorCrop(float x, float y)

Specifies sensor crop rectangle

Parameters
  • x: Top left X coordinate

  • y: Top left Y coordinate

std::tuple<float, float> getSensorCrop() const

Return

Sensor top left crop coordinates

float getSensorCropX() const

Get sensor top left x crop coordinate.

float getSensorCropY() const

Get sensor top left y crop coordinate.

void setWaitForConfigInput(bool wait)

Specify whether to wait until inputConfig receives a configuration message before sending out a frame.

Parameters
  • wait: True to wait for inputConfig message, false otherwise

bool getWaitForConfigInput() const

See

setWaitForConfigInput

Return

True if wait for inputConfig message, false otherwise

void setPreviewKeepAspectRatio(bool keep)

Specifies whether the preview output should preserve the aspect ratio after downscaling from the video size.

Parameters
  • keep: If true, a larger crop region is considered so the final image can still be produced in the specified aspect ratio. Otherwise, the video frame is resized to fit the preview size

bool getPreviewKeepAspectRatio()

See

setPreviewKeepAspectRatio

Return

Preview keep aspect ratio option

Public Members

CameraControl initialControl

Initial control options to apply to sensor

Input inputConfig = {*this, "inputConfig", Input::Type::SReceiver, false, 8, {{DatatypeEnum::ImageManipConfig, false}}}

Input for ImageManipConfig message, which can modify crop parameters in runtime

Default queue is non-blocking with size 8

Input inputControl = {*this, "inputControl", Input::Type::SReceiver, true, 8, {{DatatypeEnum::CameraControl, false}}}

Input for CameraControl message, which can modify camera parameters in runtime

Default queue is blocking with size 8

Output video = {*this, "video", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Outputs ImgFrame message that carries NV12 encoded (YUV420, UV plane interleaved) frame data.

Suitable for use with VideoEncoder node

Output preview = {*this, "preview", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Outputs ImgFrame message that carries BGR/RGB planar/interleaved encoded frame data.

Suitable for use with NeuralNetwork node

Output still = {*this, "still", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Outputs ImgFrame message that carries NV12 encoded (YUV420, UV plane interleaved) frame data.

The message is sent only when a CameraControl message with the captureStill command set arrives at inputControl.

Output isp = {*this, "isp", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Outputs ImgFrame message that carries YUV420 planar (I420/IYUV) frame data.

Generated by the ISP engine, and the source for the ‘video’, ‘preview’ and ‘still’ outputs

Output raw = {*this, "raw", Output::Type::MSender, {{DatatypeEnum::ImgFrame, false}}}

Outputs ImgFrame message that carries RAW10-packed (MIPI CSI-2 format) frame data.

Captured directly from the camera sensor, and the source for the ‘isp’ output.

Public Static Attributes

constexpr const char *NAME = "ColorCamera"

Private Members

std::shared_ptr<RawCameraControl> rawControl
