C++
-
namespace spectacularAI
Typedefs
-
using Matrix3d = std::array<std::array<double, 3>, 3>
A 3x3 matrix, row major (accessed as m[row][col]). Also note that when the matrix is symmetric (like covariance matrices), there is no difference between row-major and column-major orderings.
-
using Matrix4d = std::array<std::array<double, 4>, 4>
A 4x4 matrix, row major (accessed as m[row][col]). Typically used with homogeneous coordinates.
Enums
-
enum class TrackingStatus
6-DoF pose tracking status
Values:
-
enumerator INIT
Initial status when tracking starts and is still initializing
-
enumerator TRACKING
Tracking is accurate (but not globally referenced)
-
enumerator LOST_TRACKING
Tracking has failed. Outputs are no longer produced until the system recovers, which will be reported as another tracking state
Functions
-
double getHorizontalUncertainty(const Matrix3d &covariance)
Get a scalar horizontal uncertainty value in meters from a covariance matrix.
This is an RMS (“1-sigma”) value. In the general anisotropic case, where the covariance matrix is a rotated version of diag(sx^2, sy^2), it is defined as sqrt(sx^2 + sy^2). This is also equivalent to the Frobenius norm of the 2x2 horizontal part of the covariance matrix. If the horizontal uncertainty is isotropic, this is the same as the 1D standard deviation along the X or Y axis.
-
double getVerticalUncertainty(const Matrix3d &covariance)
Vertical uncertainty (standard deviation, i.e., approximately the 67th percentile of absolute error) from a covariance matrix
-
Matrix3d buildCovarianceMatrix(double horizontalSigma, double verticalSigma)
Construct a covariance matrix from vertical and horizontal uncertainty components specified as 1-sigma values (i.e., 1D standard deviations). The resulting covariance is isotropic in the XY plane.
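As an illustration of how these covariance helpers fit together, here is a minimal sketch; the include path is an assumption, everything else uses only the declarations above.

#include <spectacularAI/types.hpp>   // include path is an assumption
#include <cassert>
#include <cmath>

int main() {
    // 0.5 m horizontal and 1.0 m vertical 1-sigma uncertainty, isotropic in XY.
    spectacularAI::Matrix3d cov = spectacularAI::buildCovarianceMatrix(0.5, 1.0);

    // Horizontal RMS value: sqrt(sx^2 + sy^2) = sqrt(0.25 + 0.25) ~= 0.71 m.
    double horizontal = spectacularAI::getHorizontalUncertainty(cov);
    // Vertical 1-sigma value: 1.0 m.
    double vertical = spectacularAI::getVerticalUncertainty(cov);

    assert(std::abs(horizontal - std::sqrt(0.5)) < 1e-9);
    assert(std::abs(vertical - 1.0) < 1e-9);
    return 0;
}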
-
double getQuaternionHeading(const Quaternion &q)
Get the yaw/heading angle of a local-to-world quaternion in degrees
-
double getQuaternionPitch(const Quaternion &q)
Pitch angle of a local-to-world quaternion in degrees
-
double getQuaternionRoll(const Quaternion &q)
Roll angle of a local-to-world quaternion in degrees
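For example, a small helper that prints all three Euler angles of a local-to-world orientation (only the include path is an assumption):

#include <spectacularAI/types.hpp>   // include path is an assumption
#include <cstdio>

// All three helpers take a local-to-world quaternion and return degrees.
void printEulerAngles(const spectacularAI::Quaternion &q) {
    std::printf("heading %.1f, pitch %.1f, roll %.1f (deg)\n",
                spectacularAI::getQuaternionHeading(q),
                spectacularAI::getQuaternionPitch(q),
                spectacularAI::getQuaternionRoll(q));
}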
-
struct AutoExposureOutput
Auto exposure output structure. Each auto exposure output corresponds to a specific image with timestamp and (optional) tag. Contains exposure information that can be used to implement different auto exposure control algorithms, depending on the device.
Public Members
-
int deltaBrightness
Determines how much brightness should be changed (±0-255)
-
uint8_t brightness
Approx. image brightness (0-255)
-
uint32_t minExposureTime
Minimum exposure time for auto exposure (microseconds)
-
uint32_t maxExposureTime
Maximum exposure time for auto exposure (microseconds)
-
uint32_t minGain
Minimum auto exposure gain (ISO)
-
uint32_t maxGain
Maximum auto exposure gain (ISO)
-
double timestamp
Image timestamp (seconds)
-
int tag
Input frame tag (optional)
-
struct Bitmap
Simple wrapper for a bitmap in RAM (not GPU RAM).
Public Functions
-
cv::Mat asOpenCV(bool flipColors = true)
Create an OpenCV matrix that may refer to this image as a shallow copy. Use .clone() to make a deep copy if needed. If flipColors is true (the default), the colors are automatically converted from RGB to BGR if necessary.
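A usage sketch: saving a bitmap to disk with OpenCV. The include path and the PNG output are assumptions; asOpenCV and its flipColors behavior are documented above.

#include <spectacularAI/types.hpp>   // include path is an assumption
#include <opencv2/imgcodecs.hpp>
#include <string>

void saveBitmapAsPng(spectacularAI::Bitmap &bitmap, const std::string &path) {
    // Shallow copy into an OpenCV header; flipColors = true (default) converts
    // RGB to BGR if necessary, which is what cv::imwrite expects.
    cv::Mat view = bitmap.asOpenCV();
    // The view is only valid while `bitmap` is alive; clone when the pixels
    // need to outlive it.
    cv::imwrite(path, view.clone());
}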
-
virtual ~Bitmap()
-
virtual int getWidth() const = 0
Image width in pixels
-
virtual int getHeight() const = 0
Image height in pixels
-
virtual ColorFormat getColorFormat() const = 0
Pixel color format / channel configuration
-
virtual const std::uint8_t *getDataReadOnly() const = 0
Data in row major order. Rows must be contiguous. 8-32 bits per pixel, as defined by ColorFormat.
-
virtual std::uint8_t *getDataReadWrite() = 0
Public Static Functions
-
static std::unique_ptr<Bitmap> create(int width, int height, ColorFormat colorFormat)
Create a bitmap with undefined contents
-
static std::unique_ptr<Bitmap> createReference(int width, int height, ColorFormat colorFormat, std::uint8_t *data, int rowStride = 0)
Create bitmap wrapper for plain data (not copied)
-
struct Camera
Public Functions
-
virtual bool pixelToRay(const PixelCoordinates &pixel, Vector3d &ray) const = 0
Convert pixel coordinates to camera coordinates
- Parameters:
pixel –
ray –
- Returns:
true if conversion succeeded
-
virtual bool rayToPixel(const Vector3d &ray, PixelCoordinates &pixel) const = 0
Convert camera coordinates to pixel coordinates
- Parameters:
ray –
pixel –
- Returns:
true if conversion succeeded
-
virtual Matrix3d getIntrinsicMatrix() const = 0
Defined as the rectified intrinsic matrix if there’s distortion.
- Returns:
Intrinsic matrix in the OpenCV convention:
[ fx, 0, ppx;
  0, fy, ppy;
  0, 0, 1 ]
-
virtual Matrix4d getProjectionMatrixOpenGL(double nearClip, double farClip) const = 0
Project from camera coordinates to normalized device coordinates (NDC).
- Parameters:
nearClip –
farClip –
-
virtual ~Camera()
-
struct CameraPose
Pose of camera and conversions between world, camera, and pixel coordinates.
Public Functions
-
Matrix4d getWorldToCameraMatrix() const
Matrix that converts homogeneous world coordinates to homogeneous camera coordinates
-
Matrix4d getCameraToWorldMatrix() const
Matrix that converts homogeneous camera coordinates to homogeneous world coordinates
-
bool pixelToWorld(const PixelCoordinates &pixel, Vector3d &origin, Vector3d &ray) const
Convert pixel coordinates to rays in world coordinates.
- Parameters:
pixel –
origin – current camera position in world coordinates
ray – direction of ray from the origin
- Returns:
true if conversion succeeded
-
bool worldToPixel(const Vector3d &point, PixelCoordinates &pixel) const
Convert world coordinates to pixel coordinates
- Parameters:
point –
pixel –
- Returns:
true if conversion succeeded
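A sketch of casting a ray through an image pixel; the include path and the x/y(/z) member names of PixelCoordinates and Vector3d are assumptions, the pixelToWorld call is documented above.

#include <spectacularAI/types.hpp>   // include path is an assumption
#include <cstdio>

void castCenterRay(const spectacularAI::CameraPose &camPose, int width, int height) {
    spectacularAI::PixelCoordinates pixel;
    pixel.x = 0.5f * width;    // member names assumed; subpixel accuracy is supported
    pixel.y = 0.5f * height;

    spectacularAI::Vector3d origin, ray;
    if (camPose.pixelToWorld(pixel, origin, ray)) {
        // origin = current camera position, ray = direction, both in world coordinates
        std::printf("origin (%.2f, %.2f, %.2f), ray (%.2f, %.2f, %.2f)\n",
                    origin.x, origin.y, origin.z, ray.x, ray.y, ray.z);
    }
}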
-
struct CameraRayTo3DMatch
Match between a 3D ray from the camera and a 3D point in world coordinates
-
struct CameraRayToWgsMatch
Match between a 3D ray from the camera and WGS-84 coordinates
-
struct FeaturePoint
Sparse 3D feature point observed from a certain camera frame.
Public Members
-
int64_t id
An integer ID to identify same points in different revisions of the point cloud, e.g., the same point observed from a different camera frame.
-
PixelCoordinates pixelCoordinates
Pixel coordinates of the observation in the camera frame. If the pixel coordinates are not available, set to { -1, -1 }.
The coordinates are not available in case of trigger outputs that do not correspond to any camera frame.
-
int status = 0
Implementation-defined status/type
-
struct Frame
-
struct GnssVioOutput
Public Functions
-
virtual CameraPose getEnuCameraPose(int cameraId, WgsCoordinates enuOrigin) const = 0
Get the global pose of a particular camera. The “world” coordinate system of the camera pose is an East-North-Up system, whose origin is at the given WGS84 coordinates.
- Parameters:
cameraId – index of the camera whose pose is returned
enuOrigin – the origin of the ENU coordinate system
-
virtual ~GnssVioOutput()
Public Members
-
WgsCoordinates coordinates
Position in global coordinates in WGS84 (mean)
-
Quaternion orientation
Orientation of the device as quaternion representing camera-to-ENU transformation, i.e., a camera-to-world where the world coordinates are defined in an East-North-Up coordinate system. Note that the origin of the ENU coordinates system does not matter in this context.
See the helper methods for obtaining the Euler angles.
-
struct PixelCoordinates
Coordinates of an image pixel, subpixel accuracy
-
struct Pose
Represents the pose (position & orientation) of a device at a given time
Public Functions
Public Members
-
double time
Timestamp in seconds. Monotonically increasing
-
Vector3d position
3D position in a right-handed metric coordinate system where the z-axis points up
-
Quaternion orientation
Orientation quaternion in the same coordinate system as position
-
struct Quaternion
Quaternion representation of a rotation. Hamilton convention.
-
struct Replay
Visual-Inertial Odometry API Replay
Public Functions
-
virtual ~Replay()
Destructor. Stops any worker threads and frees other resources
-
virtual void startReplay() = 0
Starts replaying data in the background until replay is closed or entire session has been played.
-
virtual void runReplay() = 0
Starts replaying data and blocks until replay is closed or entire session has been played.
-
virtual bool replayOneLine() = 0
Plays a single line of data and returns false when there is no more data.
-
virtual void setPlaybackSpeed(double speed) = 0
Sets the playback speed: 1.0 == real time, 2.0 == 2x fast forward, 0.5 == half speed, -1.0 == unlimited. Defaults to 1.0.
-
virtual void setDryRun(bool isDryRun) = 0
If enabled, read and parse recorded data, but do not feed it to the algorithm. This is useful for performance measurements: a dry run can be used to estimate the data parsing time which does not happen in the real-time use case.
-
virtual void setOutputCallback(const std::function<void(VioOutputPtr)> &onOutput) = 0
Set output callback, called whenever new output is available.
The output remains valid as long as the shared pointer is alive. Can be passed to different threads and the last reference can be destructed in another thread. However, these should not be stored long term to avoid memory leaks.
Will be called from the same thread where gyroscope samples are added.
-
virtual void setExtendedOutputCallback(const std::function<void(VioOutputPtr, const FrameSet&)> &onOutput) = 0
Experimental!
-
virtual void close() = 0
Do not use this!
Public Static Functions
-
static Builder builder(const std::string &dataFolder, Vio::Builder vioBuilder)
Setup a Replay.
- Parameters:
dataFolder – Path to folder containing sensor data, and optionally calibration and VIO configuration.
vioBuilder – Vio builder. If Vio sets a calibration path, it will replace any calibration Replay reads from the data folder. If Vio sets a configuration, it will be appended to configuration Replay reads from the data folder, if any.
-
struct Builder
Public Functions
-
Builder setFfmpeg(bool enabled)
Optional. Set whether to use ffmpeg for reading video files. Default is true.
-
Builder setIgnoreFolderConfiguration(bool ignore)
Optional. Set whether to ignore vio_config.yaml in recording folder. Default is false.
-
Builder setStartTime(double time)
Start replay at this time, relative to the first frame timestamp. If not set, processing starts at the first input (which may be before the first frame).
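A sketch of a typical offline replay loop; spectacularAI::Vio::builder() and Builder::build() are assumptions not covered by this reference, as are the include paths. The Replay methods used are documented above.

#include <spectacularAI/vio.hpp>      // include paths are assumptions
#include <spectacularAI/replay.hpp>
#include <iostream>

int main() {
    auto replay = spectacularAI::Replay::builder(
            "path/to/recording",
            spectacularAI::Vio::builder())   // assumed factory for Vio::Builder
        .setIgnoreFolderConfiguration(false)
        .build();                            // assumed finalizer returning the Replay

    replay->setPlaybackSpeed(-1.0);          // unlimited speed, see setPlaybackSpeed above
    replay->setOutputCallback([](spectacularAI::VioOutputPtr output) {
        std::cout << output->asJson() << std::endl;
    });
    replay->runReplay();                     // blocks until the whole session is played
    return 0;
}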
-
struct Vector3d
Vector in R^3, can represent, e.g., velocity, position or angular velocity
-
struct Vector3f
Vector in R^3 (single precision)
-
struct VioOutput
Main output structure
Public Functions
-
virtual CameraPose getCameraPose(int cameraId) const = 0
Current pose in camera coordinates
- Parameters:
cameraId – 0 for primary, 1 for secondary camera
-
virtual std::string asJson() const
Returns the output as a JSON string if supported. Otherwise returns an empty string.
-
virtual ~VioOutput()
Public Members
-
TrackingStatus status
Current tracking status
-
Pose pose
The current pose, with the timestamp in the clock used for input sensor data and camera frames.
-
Vector3d angularVelocity
Angular velocity vector in SI units (rad/s) and the coordinate system used by pose.
-
std::vector<Pose> poseTrail
List of poses, where the first element is the current pose and the following (zero or more) elements are recent smoothed historical poses
-
std::vector<FeaturePoint> pointCloud
Point cloud (list of FeaturePoints) that correspond to features currently seen in the primary camera frame.
-
int tag
The input frame tag. This is the value given in addFrame… methods
-
std::shared_ptr<const GnssVioOutput> globalPose
GNSS-VIO output if available, otherwise {} (nullptr)
-
struct Visualization
Experimental visualization API (TODO: improve documentation)
Public Functions
-
virtual std::unique_ptr<Bitmap> createRenderTarget() = 0
If supported, creates a default image with correct dimensions and type supported by Visualization.
-
virtual void update(VioOutputPtr output) = 0
Update the visualization based on VioOutput
-
virtual void render(Bitmap &bitmap) = 0
Render to a given bitmap that is either created with createRenderTarget or otherwise supported (implementation-specific)
-
virtual bool ready() const = 0
Check if the visualization is ready to be rendered
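A rendering sketch; where the Visualization instance comes from is outside this reference, and OpenCV plus the include path are assumptions. The Visualization and Bitmap calls are documented above.

#include <spectacularAI/types.hpp>   // include path is an assumption
#include <opencv2/highgui.hpp>

// `target` should be created once with visualization.createRenderTarget().
void renderOutput(spectacularAI::Visualization &visualization,
                  spectacularAI::Bitmap &target,
                  spectacularAI::VioOutputPtr output) {
    visualization.update(output);
    if (!visualization.ready()) return;   // nothing to draw yet
    visualization.render(target);         // draws into the bitmap
    cv::imshow("vio visualization", target.asOpenCV());
    cv::waitKey(1);
}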
-
virtual ~Visualization()
-
struct WgsCoordinates
Global coordinates, WGS-84
-
namespace daiPlugin
Typedefs
-
using DaiPipelineAdapter = DaiPipeline
-
using DaiDeviceAdapter = DaiDevice
Functions
-
inline std::string getJsonStringOrEmptyValueSafe(const nlohmann::json &json, const char *key)
-
struct Configuration
Plugin and Spectacular AI VIO SDK configuration variables.
Public Members
-
bool useStereo = true
-
bool useSlam = true
-
bool useFeatureTracker = true
-
bool fastVio = false
-
bool lowLatency = false
-
bool forceRectified = false
-
bool forceUnrectified = false
-
bool useColor = false
-
bool useColorStereoCameras = false
-
bool useEncodedVideo = false
-
bool useGrayDepth = false
-
std::string mapSavePath = ""
-
std::string mapLoadPath = ""
-
bool useVioAutoExposure = false
-
std::string aprilTagPath = ""
-
bool meshRectification = false
-
bool depthScaleCorrection = false
-
bool extendedDisparity = false
-
bool fastImu = false
-
bool useReaderThread = true
-
unsigned imuFrequencyHz = 0
-
unsigned accFrequencyHz = 500
-
unsigned gyroFrequencyHz = 400
-
bool ensureSufficientUsbSpeed = true
-
bool silenceUsbWarnings = false
-
unsigned monoQueueSize = 8
-
unsigned depthQueueSize = 20
-
unsigned imuQueueSize = 50
-
int keyframeCandidateEveryNthFrame = 6
-
std::string inputResolution = "400p"
-
std::string recordingFolder = ""
-
bool recordingOnly = false
-
bool disableCameras = false
-
std::vector<std::string> parameterSets = {"wrapper-base", "oak-d", "live"}
-
std::map<std::string, std::string> internalParameters
Internal SDK parameter overrides (key-value pairs). Not safe for unsanitized user input.
-
struct Hooks
When the plugin pulls something from the Depth AI API, those intermediary outputs can also be accessed through these hooks. Note that if an input is not used by the plugin with the given Configuration, the corresponding hook is never triggered.
Depending on the configuration, the hooks may not be invoked from the same thread in which they were created.
Public Members
-
struct Pipeline
Spectacular AI pipeline for Depth AI API. It is recommended to build this using the constructors provided, but it is also possible to replace or configure all parts manually.
Public Functions
-
Pipeline(DaiPipeline daiPipeline)
Create a pipeline with the default configuration.
-
Pipeline(DaiPipeline daiPipeline, const Configuration &config)
Create a pipeline with custom configuration
-
Pipeline(DaiPipeline daiPipeline, const Configuration &config, std::function<void(mapping::MapperOutputPtr)> onMapperOutput)
Create a pipeline with custom configuration with Mapping API enabled
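A sketch of the typical Depth AI wiring; the dai:: calls, the Pipeline::startSession signature, and the include paths are assumptions based on common usage, while the constructor and the Session interface are documented here.

#include <depthai/depthai.hpp>                 // include paths are assumptions
#include <spectacularAI/depthai/plugin.hpp>
#include <iostream>

int main() {
    dai::Pipeline daiPipeline;

    spectacularAI::daiPlugin::Configuration config;
    config.useSlam = true;                     // documented configuration flag

    // Wires the Spectacular AI nodes into the Depth AI pipeline.
    spectacularAI::daiPlugin::Pipeline vioPipeline(daiPipeline, config);

    dai::Device device(daiPipeline);
    auto session = vioPipeline.startSession(device);   // assumed signature

    while (true) {
        auto output = session->waitForOutput();
        std::cout << output->asJson() << std::endl;
    }
}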
Public Members
-
const Configuration configuration
Spectacular AI VIO SDK configuration
-
IMUNode imu
-
MonoCameraNode monoLeft
-
MonoCameraNode monoRight
-
MonoCameraNode monoPrimary
-
MonoCameraNode monoSecondary
-
ColorCameraNode colorLeft
-
ColorCameraNode colorRight
-
ColorCameraNode colorPrimary
-
ColorCameraNode colorSecondary
-
ColorCameraNode color
-
ImageManipNode colorImageManip
-
StereoDepthNode stereo
-
FeatureTrackerNode featureTracker
-
ScriptNode scriptPrimary
-
ScriptNode scriptSecondary
-
VideoEncoderNode videoEncoderPrimary
-
VideoEncoderNode videoEncoderSecondary
-
XLinkInNode xinControl
-
XLinkOutNode xoutImu
-
XLinkOutNode xoutLeft
-
XLinkOutNode xoutRight
-
XLinkOutNode xoutScriptPrimary
-
XLinkOutNode xoutScriptSecondary
-
XLinkOutNode xoutPrimary
-
XLinkOutNode xoutSecondary
-
XLinkOutNode xoutFeatures
-
XLinkOutNode xoutDepth
-
std::string spectacularAIConfigurationYAML
-
const std::function<void(mapping::MapperOutputPtr)> onMapperOutput
-
struct PipelineExtensions
Public Members
-
PipelineRAE rae
-
struct PipelineRAE
-
struct RAECameraNode
Public Members
-
ColorCameraNode colorLeft
-
ColorCameraNode colorRight
-
StereoDepthNode stereoDepth
-
FeatureTrackerNode featureTracker
-
ScriptNode scriptLeft
-
ScriptNode scriptRight
-
XLinkInNode xinControl
-
XLinkOutNode xoutLeft
-
XLinkOutNode xoutRight
-
XLinkOutNode xoutScriptLeft
-
XLinkOutNode xoutScriptRight
-
XLinkOutNode xoutFeatures
-
XLinkOutNode xoutDepth
-
class RawImgFrameTypeConverter
-
struct Session
VIO session. Should be created via Pipeline::startSession.
Public Functions
-
virtual bool hasOutput() const = 0
Check if new output is available
-
virtual std::shared_ptr<const VioOutput> getOutput() = 0
Get output from the queue, if available. If not, returns {}
-
virtual std::shared_ptr<const VioOutput> waitForOutput() = 0
Wait until new output is available and then return it.
-
virtual bool work() = 0
DEPRECATED, do not use.
-
virtual void addTrigger(double t, int tag) = 0
Add an external trigger input. Causes additional output corresponding to a certain timestamp to be generated.
- Parameters:
t – timestamp, monotonic float seconds
tag – additional tag to identify this particular trigger event. The default outputs corresponding to input camera frames have a tag 0.
-
virtual void addAbsolutePose(const Pose &pose, const Matrix3d &positionCovariance, double orientationVariance = -1) = 0
Add external pose information. VIO will correct its estimates to match the pose.
- Parameters:
pose – pose of the output coordinates in the external world coordinates
positionCovariance – position uncertainty as a covariance matrix in the external world coordinates
orientationVariance – optional orientation uncertainty as variance of angle
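A sketch of feeding an external pose (e.g., from a motion capture system). Interpreting the orientation variance as radians squared and the include path are assumptions; the Pose fields and the covariance helper are documented above.

#include <spectacularAI/depthai/plugin.hpp>   // include path is an assumption

void feedExternalPose(spectacularAI::daiPlugin::Session &session,
                      double timestamp,
                      const spectacularAI::Vector3d &positionMeters,
                      const spectacularAI::Quaternion &orientation) {
    spectacularAI::Pose pose;
    pose.time = timestamp;
    pose.position = positionMeters;
    pose.orientation = orientation;

    // 2 cm horizontal and 5 cm vertical 1-sigma position uncertainty.
    auto positionCov = spectacularAI::buildCovarianceMatrix(0.02, 0.05);
    // ~1 degree orientation uncertainty, expressed as variance (assumed radians^2).
    const double oneDegree = 3.14159265358979 / 180.0;
    session.addAbsolutePose(pose, positionCov, oneDegree * oneDegree);
}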
-
virtual void addGnss(double timestamp, const WgsCoordinates &coordinates, const Matrix3d &enuPositionCovariance) = 0
Add GNSS input (for GNSS-VIO fusion)
- Parameters:
timestamp – timestamp of the GNSS fix
coordinates – position mean
enuPositionCovariance – position uncertainty as a covariance matrix in an East-North-Up (ENU) coordinate system
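A GNSS fusion sketch; the WgsCoordinates member names (latitude, longitude, altitude) and the include path are assumptions, the addGnss call and the covariance helper are documented above.

#include <spectacularAI/depthai/plugin.hpp>   // include path is an assumption

void feedGnssFix(spectacularAI::daiPlugin::Session &session, double timestamp,
                 double latitude, double longitude, double altitude,
                 double horizontalSigma, double verticalSigma) {
    spectacularAI::WgsCoordinates coords;
    coords.latitude = latitude;      // member names are assumptions
    coords.longitude = longitude;
    coords.altitude = altitude;

    // ENU position covariance from 1-sigma values (documented helper above).
    auto enuCov = spectacularAI::buildCovarianceMatrix(horizontalSigma, verticalSigma);
    session.addGnss(timestamp, coords, enuCov);
}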
-
virtual CameraPose getRgbCameraPose(const VioOutput &vioOut) const = 0
-
virtual ~Session()
-
namespace internal
Functions
-
Resolution getSupportedVideoResolution(const Configuration &config)
-
MonoResolution convertInputResolution(const std::string &reso)
-
ColorInputResolution convertColorInputResolution(const std::string &reso)
-
ColorResolution selectNearbyColorResolution(int monoHeight, const std::string &colorInputResolution)
-
std::string getYAMLConfiguration(const Configuration &config)
-
CameraCalibration getCalibration(const Configuration &config, const Matrix4d &imuToCamera, DaiDeviceAdapter device)
-
bool isDeviceRAE(const Configuration &config)
-
bool useRaeBackCameras(const Configuration &config)
Variables
-
const char *STREAM_TAG
-
struct CameraCalibration
Public Members
-
std::string json
-
spectacularAI::Matrix3d colorIntrinsics
-
std::vector<float> colorCameraDistortionCoeffs
-
Resolution stereoResolution
-
ColorResolution colorResolution
-
double depthScale
-
struct ColorInputResolution
-
struct ColorResolution
-
struct MonoResolution
-
struct Resolution
-
namespace k4aPlugin
Functions
-
std::string getCalibration(const k4a_calibration_t &calibration, const Configuration &config)
-
k4a_device_configuration_t getK4AConfiguration(const std::string &colorResolution = "720p", int depthMode = K4A_DEPTH_MODE_WFOV_2X2BINNED, int frameRate = 30, bool useStereo = true)
-
std::string getYAMLConfiguration(const Configuration &config)
-
struct Configuration
Plugin and Spectacular AI VIO SDK configuration variables.
Public Members
-
bool useStereo = true
Set true to use RGB-D mode, false for mono (RGB camera only).
-
bool useSlam = true
Enable SLAM, required for ICP
-
bool fastVio = false
Enable more lightweight VIO parameters
-
bool alignedDepth = false
Align depth images to color images using Azure Kinect SDK.
Note that this affects ICP performance since the depth image has a larger field of view than the color camera.
-
std::string recordingFolder = ""
-
bool recordingOnly = false
-
bool postProcessFinalMap = false
If true, final mapping API output is refined using post processing.
-
std::string aprilTagPath = ""
Path to .json file with AprilTag information. AprilTag detection is enabled when not empty. For the file format see: https://github.com/SpectacularAI/docs/blob/main/pdf/april_tag_instructions.pdf Note: sets useSlam=true
-
k4a_device_configuration_t k4aConfig = getK4AConfiguration()
Configuration for the k4a device.
-
bool ignoreOutputs = false
If false, user should read vio outputs from the Session. If true, the SDK discards vio outputs. Useful when VIO outputs are not needed: e.g. when using mapping API or recording.
-
std::map<std::string, std::string> internalParameters
Internal SDK parameter overrides (key-value pairs). Not safe for unsanitized user input.
-
class Pipeline
The Pipeline object stores information about the device and also sets the default configuration, e.g., enables and configures necessary Azure Kinect streams and sensors. The actual VIO reading is started with startSession.
Public Functions
-
Pipeline()
Create a pipeline with the default configuration.
-
Pipeline(const Configuration &config)
Create a pipeline with custom configuration
-
Pipeline(const Configuration &config, std::function<void(mapping::MapperOutputPtr)> onMapperOutput)
Create a pipeline with custom configuration with Mapping API enabled
-
std::unique_ptr<Session> startSession()
Start a new VIO session on the background. Internally starts the k4a device and creates threads for camera and imu.
-
k4a_device_t getDeviceHandle()
Get a handle to the Azure Kinect device; it can be used to adjust device settings. Note that adjusting some settings, e.g., device FPS, can affect VIO performance.
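A minimal Azure Kinect session sketch; apart from the assumed include path, only calls documented here are used.

#include <spectacularAI/k4a/plugin.hpp>   // include path is an assumption
#include <iostream>

int main() {
    spectacularAI::k4aPlugin::Configuration config;
    config.useSlam = true;                 // documented flag, required for ICP

    spectacularAI::k4aPlugin::Pipeline pipeline(config);
    auto session = pipeline.startSession();   // starts the k4a device and worker threads

    while (true) {
        auto output = session->waitForOutput();
        std::cout << output->asJson() << std::endl;
    }
}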
-
~Pipeline()
-
struct Session
Public Functions
-
virtual bool hasOutput() const = 0
Check if new output is available.
-
virtual std::shared_ptr<const VioOutput> getOutput() = 0
Get output from the queue, if available. If not, returns {}.
-
virtual std::shared_ptr<const VioOutput> waitForOutput() = 0
Wait until new output is available and then return it.
-
virtual void ignoreOutputs(bool ignore) = 0
Set whether new outputs are added to the queue. Useful when the user is not interested in VIO outputs: e.g. when using mapping API or recording.
- Parameters:
ignore – If true, new outputs are not added to the queue. Otherwise, the user has to make sure the outputs are pulled from the queue.
-
virtual void addTrigger(double t, int tag) = 0
Add an external trigger input. Causes additional output corresponding to a certain timestamp to be generated.
- Parameters:
t – timestamp, monotonic float seconds
tag – additional tag to identify this particular trigger event. The default outputs corresponding to input camera frames have a tag 0.
-
virtual ~Session()
-
namespace mapping
Typedefs
-
using MapperOutputPtr = std::shared_ptr<const MapperOutput>
-
struct Frame
Public Members
-
CameraPose cameraPose
Camera pose information
-
double depthScale
Set when the image type is depth. Depth image values are multiplied by this number to make their unit meters (e.g., if depth is integer mm, this will be set to 1.0 / 1000).
-
std::vector<FeaturePoint> sparseFeatures
2D-to-3D sparse feature matches in this frame. Empty if not available for this frame type, e.g., depth frames.
-
std::vector<VisualMarker> visualMarkers
Visual Markers detected in this frame.
Only April Tags are supported currently. Always empty if April Tag detection is not enabled.
-
struct FrameSet
Public Functions
Compute the depth map at the camera pose of the given frame (for example rgbFrame). If depthFrame is already aligned to the given camera, returns a pointer to depthFrame.
Returns an undistorted version of the given frame (i.e., with camera model = undistorted pinhole). If the given frame is not distorted, returns a pointer to distortedFrame.
-
inline virtual ~FrameSet()
-
struct KeyFrame
Public Members
-
int64_t id
Unique ID for this keyframe. Monotonically increasing.
-
std::shared_ptr<PointCloud> pointCloud
(Optional) Dense point cloud for this keyframe in primary frame coordinates.
-
struct Map
Public Members
-
std::map<int64_t, std::shared_ptr<const KeyFrame>> keyFrames
ID to KeyFrame map. The shared_ptr objects are always non-empty. May be changed to a similar associative container in the future, do not rely on this being std::map.
-
std::map<int64_t, spectacularAI::Vector3d> mapPoints
List of sparse 3D map points (a.k.a. landmarks). These are observed from one or more KeyFrames, where the corresponding 2D pixel coordinates are stored in the sparseFeatures fields.
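A sketch of walking through a Map; how the Map is obtained from MapperOutput is not covered by this reference, so the function takes it directly, and the include path is an assumption.

#include <spectacularAI/mapping.hpp>   // include path is an assumption
#include <cstdio>

void summarizeMap(const spectacularAI::mapping::Map &map) {
    std::printf("%zu keyframes, %zu sparse map points\n",
                map.keyFrames.size(), map.mapPoints.size());

    for (const auto &entry : map.keyFrames) {
        const auto &keyFrame = *entry.second;   // shared_ptr is documented to be non-empty
        if (keyFrame.pointCloud && !keyFrame.pointCloud->empty()) {
            std::printf("keyframe %lld: %zu dense points\n",
                        static_cast<long long>(keyFrame.id),
                        keyFrame.pointCloud->size());
        }
    }
}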
-
struct MapperOutput
-
struct Mesh
A generic 3D mesh with a few optional properties (e.g., normals, colors, textures). Flexible enough to support both vertex-specific and face-specific normals/colors.
Public Types
-
using IndexFace = std::array<std::uint32_t, 3>
Public Functions
-
virtual bool empty() const = 0
Returns true if the mesh has 0 vertices (and triangles).
-
virtual bool hasNormals() const = 0
Returns true if the mesh has optional normal data.
-
virtual std::size_t vertexCount() const = 0
Returns the number of vertices in the mesh.
-
virtual std::size_t normalCount() const = 0
Returns the number of normals in the mesh.
-
virtual std::size_t faceCount() const = 0
Returns the number of triangles in the mesh.
-
virtual Triangle getTriangle(std::size_t faceIndex) const = 0
Helper function to get a single triangle, yields an error if index >= size().
-
virtual const Vector3f *getPositionData() = 0
Efficient way to get all vertex positions, returns nullptr if the mesh is empty.
-
virtual const Vector3f *getNormalData() = 0
Efficient way to get all normals, returns nullptr if the mesh is empty or hasNormals() is false.
-
virtual const IndexFace *getFaceVertices() = 0
Efficient way to get all face vertex indices, returns nullptr if the mesh is empty.
-
virtual const IndexFace *getFaceNormals() = 0
Efficient way to get all face normal indices, returns nullptr if the mesh is empty or hasNormals() is false.
-
virtual void serializeToObj(const std::string &filename) const = 0
Save mesh as .obj file to a path specified by filename.
- Parameters:
filename – For instance, path/to/output/mesh.obj.
-
inline virtual ~Mesh()
-
struct Triangle
-
struct PointCloud
Container for point clouds, where the points may have optional properties such as colors and normals.
Public Functions
-
virtual std::size_t size() const = 0
Returns the number of points in the point cloud.
-
virtual bool empty() const = 0
Returns true if the point cloud has 0 points.
-
virtual bool hasNormals() const = 0
Returns true if the point cloud has optional normal data.
-
virtual bool hasColors() const = 0
Returns true if the point cloud has optional color data.
-
virtual Vector3f getPosition(std::size_t index) const = 0
Helper function to get a single point position vector, yields an error if index >= size().
-
virtual Vector3f getNormal(std::size_t index) const = 0
Helper function to get a single point normal vector, yields an error if hasNormals() is false or index >= size().
-
virtual std::array<std::uint8_t, 3> getRGB24(std::size_t index) const = 0
Helper function to get a single point color array, yields an error if hasColors() is false or index >= size().
-
virtual const Vector3f *getPositionData() = 0
Efficient way to get all point positions, returns nullptr if the point cloud is empty.
-
virtual const Vector3f *getNormalData() = 0
Efficient way to get all point normals, returns nullptr if the point cloud is empty or hasNormals() is false.
-
virtual const std::uint8_t *getRGB24Data() = 0
Efficient way to get all point colors, returns nullptr if the point cloud is empty or hasColors() is false.
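A sketch computing a point-cloud centroid via the bulk accessor; the x/y/z member names of Vector3f and the include path are assumptions, the PointCloud calls are documented above.

#include <spectacularAI/mapping.hpp>   // include path is an assumption
#include <cstddef>

spectacularAI::Vector3f centroid(spectacularAI::mapping::PointCloud &cloud) {
    spectacularAI::Vector3f sum{};   // zero-initialized; member names x/y/z assumed below
    if (cloud.empty()) return sum;

    const spectacularAI::Vector3f *positions = cloud.getPositionData();
    for (std::size_t i = 0; i < cloud.size(); ++i) {
        sum.x += positions[i].x;
        sum.y += positions[i].y;
        sum.z += positions[i].z;
    }
    const float n = static_cast<float>(cloud.size());
    sum.x /= n; sum.y /= n; sum.z /= n;
    return sum;
}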
-
inline virtual ~PointCloud()
-
struct VisualMarker
A visual marker observation in a frame. Only April Tags are supported currently. To enable April Tag detection in the SDK, see https://github.com/SpectacularAI/docs/blob/main/pdf/april_tag_instructions.pdf
Public Members
-
int id
Visual marker ID
-
float size
Visual marker size in meters (note: 0 if not available).
-
PixelCoordinates center
Center of the visual marker in image pixel coordinates
-
std::array<PixelCoordinates, 4> corners
Corners of the visual marker in image pixel coordinates. In visual marker coordinate frame: index 0=bottom-left, 1=bottom-right, 2=top-right, 3=top-left
-
bool hasPose
Flag that indicates whether the marker has pose information. Currently, markers with unknown size don't have a pose estimate.
-
namespace orbbecPlugin
-
struct Configuration
Plugin and Spectacular AI SDK configuration variables.
Public Functions
-
Configuration()
Default configuration, works with Femto Mega
-
Configuration(ob::Pipeline &obPipeline)
Device specific Configuration. Tested: Femto Mega, Femto Bolt, Astra2
Public Members
-
bool useSlam = true
Enable SLAM, required for ICP
-
bool fastVio = false
Enable more lightweight VIO parameters
-
bool alignedDepth = false
Align depth images to color images using OrbbecSDK.
Note that this can affect performance if the depth image has a larger field of view than the color camera.
-
std::string recordingFolder = ""
If not empty, the session will be recorded to the given folder
-
bool recordingOnly = false
Enable recording only mode (more lightweight)
-
bool postProcessFinalMap = false
If true, final mapping API output is refined using post processing.
-
std::string aprilTagPath = ""
Path to .json file with AprilTag information. AprilTag detection is enabled when not empty. For the file format see: https://github.com/SpectacularAI/docs/blob/main/pdf/april_tag_instructions.pdf Note: sets useSlam=true
-
std::pair<int, int> rgbResolution = std::make_pair(1280, 720)
RGB video resolution
-
std::pair<int, int> depthResolution = std::make_pair(512, 512)
Depth video resolution
-
OBAccelSampleRate accFrequencyHz = OBAccelSampleRate::OB_SAMPLE_RATE_1_KHZ
Accelerometer frequency
-
OBGyroSampleRate gyroFrequencyHz = OBGyroSampleRate::OB_SAMPLE_RATE_1_KHZ
Gyroscope frequency
-
OBAccelFullScaleRange accRange = OBAccelFullScaleRange::OB_ACCEL_FS_4g
Accelerometer range
-
OBGyroFullScaleRange gyroRange = OBGyroFullScaleRange::OB_GYRO_FS_1000dps
Gyroscope range
-
bool ignoreOutputs = false
If false, user should read vio outputs from the Session. If true, the SDK discards vio outputs. Useful when VIO outputs are not needed: e.g. when using mapping API or recording.
-
Matrix4d imuToCameraRgb = {}
Option to override IMU to RGB camera extrinsic matrix. Required if not using Femto Mega/Bolt or Astra 2.
-
std::map<std::string, std::string> internalParameters
Internal SDK parameter overrides (key-value pairs). Not safe for unsanitized user input.
-
class Pipeline
The Pipeline object stores information about the device and also sets the default configuration, e.g., enables and configures necessary Orbbec streams and sensors. The actual VIO reading is started with startSession.
Public Functions
-
Pipeline(ob::Pipeline &obPipeline)
Create a pipeline with the default configuration.
-
Pipeline(ob::Pipeline &obPipeline, const Configuration &config)
Create a pipeline with custom configuration
-
Pipeline(ob::Pipeline &obPipeline, const Configuration &config, std::function<void(mapping::MapperOutputPtr)> onMapperOutput)
Create a pipeline with custom configuration with Mapping API enabled
-
~Pipeline()
-
struct Session
Public Functions
-
virtual bool hasOutput() const = 0
Check if new output is available.
-
virtual std::shared_ptr<const VioOutput> getOutput() = 0
Get output from the queue, if available. If not, returns {}.
-
virtual std::shared_ptr<const VioOutput> waitForOutput() = 0
Wait until new output is available and then return it.
-
virtual void addTrigger(double t, int tag) = 0
Add an external trigger input. Causes additional output corresponding to a certain timestamp to be generated.
- Parameters:
t – timestamp, monotonic float seconds
tag – additional tag to identify this particular trigger event. The default outputs corresponding to input camera frames have a tag 0.
-
virtual void ignoreOutputs(bool ignore) = 0
DEPRECATED! Use Configuration.ignoreVioOutputs instead.
-
virtual ~Session()
-
namespace internal
Functions
-
std::string getCalibration(const OBCameraParam &calibration, const Configuration &config, const std::string &deviceName)
-
std::string getYAMLConfiguration(const Configuration &config, const std::string &deviceName)
-
namespace rsPlugin
-
struct Configuration
Plugin and Spectacular AI VIO SDK configuration variables.
Public Members
-
bool useStereo = true
-
bool useRgb = true
-
bool fastVio = false
-
bool alignedDepth = false
-
bool useSlam = true
-
bool useIcp = false
-
std::string inputResolution = "400p"
-
std::string mapSavePath = ""
-
std::string mapLoadPath = ""
-
std::string aprilTagPath = ""
-
std::string recordingFolder = ""
-
bool recordingOnly = false
-
bool postProcessFinalMap = false
-
std::map<std::string, std::string> internalParameters
Internal SDK parameter overrides (key-value pairs). Not safe for unsanitized user input.
-
class Pipeline
The Pipeline object stores information about the device and also sets the default configuration, e.g., enables and configures necessary RealSense streams and sensors. The actual VIO reading is started with startSession. It is possible to change the RealSense configuration between calling configureX and startSession, but the VIO tracking is not guaranteed to be compatible with such customized configurations.
Public Functions
-
Pipeline()
Create a pipeline with the default configuration.
-
Pipeline(const Configuration &config)
Create a pipeline with custom configuration
-
Pipeline(const Configuration &config, std::function<void(mapping::MapperOutputPtr)> onMapperOutput)
Create a pipeline with custom configuration with Mapping API enabled
-
void setMapperCallback(const std::function<void(spectacularAI::mapping::MapperOutputPtr)> &onMapperOutput)
DEPRECATED: set the callback in Pipeline constructor instead.
-
std::unique_ptr<Session> startSession(rs2::config &config)
Start a new VIO session on the background. Internally creates an rs2::pipeline and starts it.
-
std::unique_ptr<Session> startSession(rs2::config &config, const std::function<void(const rs2::frame &frame)> &callback)
Start a new VIO session on the background. Also invoke the given callback on each rs2::frame. Performing heavy computations directly in the callback is not recommended.
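A RealSense usage sketch based on the documented constructor and startSession overload; the rs2 usage and the include paths are assumptions, and the configureX steps mentioned in the class description are not shown.

#include <librealsense2/rs.hpp>                  // include paths are assumptions
#include <spectacularAI/realsense/plugin.hpp>
#include <iostream>

int main() {
    spectacularAI::rsPlugin::Configuration config;
    config.useSlam = true;                       // documented configuration flag

    spectacularAI::rsPlugin::Pipeline vioPipeline(config);

    // The plugin enables the streams it needs; additional streams could be
    // enabled on rsConfig before starting the session.
    rs2::config rsConfig;
    auto session = vioPipeline.startSession(rsConfig);

    while (true) {
        auto output = session->waitForOutput();
        std::cout << output->asJson() << std::endl;
    }
}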
-
~Pipeline()
-
struct Session
VIO session.
Public Functions
-
virtual bool hasOutput() const = 0
Check if new output is available
-
virtual std::shared_ptr<const VioOutput> getOutput() = 0
Get output from the queue, if available. If not, returns {}
-
virtual std::shared_ptr<const VioOutput> waitForOutput() = 0
Wait until new output is available and then return it. Do not use with useReaderThread=false.
-
virtual void addTrigger(double t, int tag) = 0
Add an external trigger input. Causes additional output corresponding to a certain timestamp to be generated.
- Parameters:
t – timestamp, monotonic float seconds
tag – additional tag to identify this particular trigger event. The default outputs corresponding to input camera frames have a tag 0.
-
virtual ~Session()