Skytoby

In-Depth Understanding of the Android Camera Architecture, Part 4: Qualcomm CamX-CHI


1. Overview

Looking back at the history of the Camera HAL on Qualcomm platforms, Qualcomm previously used the QCamera & MM-Camera architecture. To allow finer-grained control of the underlying hardware (Sensor/ISP and other key blocks), and to make it easier for phone vendors to add custom features, Qualcomm introduced the CamX-CHI architecture. CamX-CHI shows no trace of the old architecture, so it is a completely new design: the highly uniform functional interfaces are extracted into CamX, while the customizable parts are placed in CHI for different vendors to modify and implement their own distinctive features. The benefit of this design is obvious: even developers who do not know CamX well can still add custom features quite easily, which lowers the barrier to entry for development on Qualcomm platforms.

Next, let's get a basic feel for the architecture starting from the most intuitive angle, the directory structure. The basic CamX-CHI layout is as follows:

This code mainly lives under vendor/qcom/proprietary/:

Here camx is the collection of code implementing the common functional interfaces (CamX), and chi-cdk is the collection of code implementing the customizable parts (CHI). As the figure shows, the CamX part implements the HAL3 interface upward and communicates with the kernel through the V4L2 framework downward; in between, CamX and CHI stay in contact by dlopen()-ing each other's shared library and obtaining the other side's operation interface.

camx/ contains these main directories:

├── ./build
│   └── ./build/infrastructure
└── ./src
├── ./src/chiiqutils
├── ./src/core
├── ./src/csl
├── ./src/hwl
├── ./src/lib
├── ./src/mapperutils
├── ./src/osutils
├── ./src/settings
├── ./src/swl
└── ./src/utils
  • core/: the core CamX modules, including the hal/ directory that mainly implements the HAL3 interface, and the chi/ directory responsible for interacting with CHI

    ├── ./build
    ├── ./chi
    ├── ./hal
    ├── ./halutils
    ├── ./ncs
    └── ./oem
  • csl/: the module responsible for communication between camx and the camera driver, providing camx with a unified camera-driver control interface

├── ./android
├── ./build
├── ./common
├── ./hw
└── ./ifh
  • hwl/: hardware nodes that have independent computing capability of their own; these nodes are managed by csl

    ├── ./bps
    ├── ./cvp
    ├── ./dspinterfaces
    ├── ./fd
    ├── ./ife
    ├── ./ipe
    ├── ./iqinterpolation
    ├── ./iqsetting
    ├── ./isphwsetting
    ├── ./ispiqmodule
    ├── ./jpeg
    ├── ./lrme
    ├── ./qsat
    ├── ./statsparser
    ├── ./tfe
    └── ./titan17x
  • swl/: nodes that have no independent computing capability of their own and can only do their work on the CPU

    ├── ./eisv2
    ├── ./eisv3
    ├── ./fd
    ├── ./jpeg
    ├── ./offlinestats
    ├── ./ransac
    ├── ./sensor
    ├── ./stats
    └── ./swregistration

chi-cdk/ contains these main directories:

./api
│   ├── ./api/chromatix
│   │   ├── ./api/chromatix/presets
│   │   ├── ./api/chromatix/XML
│   │   └── ./api/chromatix/XSD
│   ├── ./api/common
│   ├── ./api/fd
│   ├── ./api/generated
│   │   └── ./api/generated/build
│   ├── ./api/isp
│   ├── ./api/ncs
│   ├── ./api/node
│   ├── ./api/pdlib
│   ├── ./api/sensor
│   ├── ./api/stats
│   └── ./api/utils
├── ./configs
├── ./core
│   ├── ./core/build
│   │   ├── ./core/build/android
│   │   └── ./core/build/linuxembedded
│   ├── ./core/chifeature2
│   │   ├── ./core/chifeature2/bitra
│   │   └── ./core/chifeature2/common
│   ├── ./core/chiframework
│   │   ├── ./core/chiframework/bitra
│   │   └── ./core/chiframework/common
│   ├── ./core/chiofflinepostproclib
│   │   ├── ./core/chiofflinepostproclib/bitra
│   │   └── ./core/chiofflinepostproclib/common
│   ├── ./core/chiofflinepostprocservice
│   │   ├── ./core/chiofflinepostprocservice/bitra
│   │   └── ./core/chiofflinepostprocservice/common
│   ├── ./core/chiusecase
│   │   ├── ./core/chiusecase/bitra
│   │   └── ./core/chiusecase/common
│   ├── ./core/chiutils
│   │   ├── ./core/chiutils/bitra
│   │   └── ./core/chiutils/common
│   └── ./core/lib
│   ├── ./core/lib/bitra
│   └── ./core/lib/common
├── ./oem
│   └── ./oem/qcom
│   ├── ./oem/qcom/actuator
│   ├── ./oem/qcom/blm
│   ├── ./oem/qcom/eebin
│   ├── ./oem/qcom/eeprom
│   ├── ./oem/qcom/fd
│   ├── ./oem/qcom/feature2
│   ├── ./oem/qcom/flash
│   ├── ./oem/qcom/formatmapper
│   ├── ./oem/qcom/module
│   ├── ./oem/qcom/node
│   ├── ./oem/qcom/ois
│   ├── ./oem/qcom/sensor
│   ├── ./oem/qcom/topology
│   ├── ./oem/qcom/tuning
│   ├── ./oem/qcom/tuningdeprecated
│   └── ./oem/qcom/utils
├── ./test
│   ├── ./test/chifeature2test
│   │   └── ./test/chifeature2test/common
│   ├── ./test/chifeature2testframework
│   │   └── ./test/chifeature2testframework/common
│   ├── ./test/chiofflinepostproctest
│   │   └── ./test/chiofflinepostproctest/common
│   ├── ./test/f2player
│   │   └── ./test/f2player/common
│   └── ./test/nativetest
│   └── ./test/nativetest/nativetestutils
└── ./tools
├── ./tools/binary_log
│   ├── ./tools/binary_log/analysis
│   ├── ./tools/binary_log/core
│   └── ./tools/binary_log/utils
├── ./tools/blmconfig
├── ./tools/buildbins
│   ├── ./tools/buildbins/linux64
│   ├── ./tools/buildbins/win32
│   └── ./tools/buildbins/yaml
├── ./tools/memprofile
└── ./tools/usecaseconverter
└── ./tools/usecaseconverter/utils
  • core/: the core CHI modules, responsible for interacting with camx; implements CHI's overall framework and its concrete business logic.
  • configs/: platform-related configuration items
  • topology/: user-defined Usecase XML configuration files
  • node/: user-defined custom nodes
  • module/: configuration files for the different sensors, needed when initializing a sensor
  • tuning/: configuration files holding the tuning parameters for different scenes
  • sensor/: private information and register settings for the different sensors
  • actuator/: configuration for the different focus actuators
  • ois/: configuration for the optical image stabilization module
  • flash/: configuration for the flash module
  • eeprom/: configuration for the external EEPROM storage module
  • fd/: configuration for the face-detection module

2. Basic Component Concepts

2.1 Usecase

The Usecase is the largest abstraction in CamX-CHI; it contains several Pipelines that together implement a specific function. It is implemented in CHI by the Usecase class, which is responsible for the business logic and resource management.

The Usecase class provides a set of common interfaces and is the base class of all existing usecases. AdvancedCameraUsecase in turn inherits from CameraUsecaseBase, and the vast majority of camera scenarios are served by instantiating AdvancedCameraUsecase. Its main interfaces are:

  • Create(): a static method that creates an AdvancedCameraUsecase instance; during construction it reads the corresponding Usecase configuration from the XML.
  • ExecuteCaptureRequest(): issues one capture request.
  • ProcessResultCb(): registered as a callback while the Session is created; called to deliver results to the AdvancedCameraUsecase once the Session has finished processing the data.
  • ProcessDriverPartialCaptureResult(): registered as a callback while the Session is created; called to deliver partial metadata produced in the Session to the AdvancedCameraUsecase.
  • ProcessMessageCb(): registered as a callback while the Session is created; called to notify the AdvancedCameraUsecase of any event the Session produces.
  • ExecuteFlush(): flushes the AdvancedCameraUsecase.
  • Destroy(): safely destroys the AdvancedCameraUsecase.

The customizable parts of a Usecase are abstracted into the common_usecase.xml file. Here is a brief introduction to its main tags (an illustrative sketch follows the list):

  • UsecaseName: the name of this Usecase; the definition is later looked up by this name.
  • Targets: the set of output data streams, including each stream's format and the range of supported output sizes.
  • Pipeline: defines all the Pipelines this Usecase may use; at least one Pipeline must be defined here.
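
To make these tags concrete, a usecase entry might look roughly like the following. This is a hand-written sketch rather than an excerpt from a real common_usecase.xml: the usecase, target, and pipeline names are made up, and tag nesting and attribute details vary between CDK versions.

<Usecase>
  <UsecaseName>UsecaseExamplePreview</UsecaseName>    <!-- hypothetical name -->
  <Targets>
    <!-- One output stream: its format plus the supported size range -->
    <Target>
      <TargetName>TARGET_BUFFER_PREVIEW</TargetName>  <!-- hypothetical name -->
      <TargetDirection>TargetOutput</TargetDirection>
      <Formats>YUV420NV12</Formats>
      <Range>
        <MinW>0</MinW><MinH>0</MinH>
        <MaxW>1920</MaxW><MaxH>1080</MaxH>
      </Range>
    </Target>
  </Targets>
  <!-- At least one pipeline must be listed -->
  <Pipeline>
    <PipelineName>RealtimePreview</PipelineName>      <!-- hypothetical name -->
  </Pipeline>
</Usecase>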

2.2 Feature

A Feature represents one specific capability that is implemented by combining several Pipelines and is managed by the Usecase. It is implemented in CHI by the Feature class and has no definition in the XML; selecting which Features to use is done inside the Usecase, and the Usecase instance is passed in when a Feature is created so that the two can access each other's resources.

The existing Features are listed below; Feature itself is the base class and defines a set of common methods.

A few commonly used Features:

  • FeatureHDR: implements the HDR function; it manages the resources and data flow of one or more internal pipelines and finally outputs an image with the HDR effect.
  • FeatureMFNR: implements the MFNR function; internally it is divided into several large stages, namely Prefiltering, Blending, Postfilter, and a final OfflineNoiseReprocess (which can optionally be enabled), each of which contains its own pipeline.
  • FeatureASD: implements the AI scene-detection function; during preview it receives every frame, analyzes it to produce the AI recognition result for the current scene, and hands the result to the upper layers (for example via metadata) for further processing.

2.3 Session

A Session is the abstract control unit that manages pipelines. A Session owns at least one pipeline, controls all the hardware resources, and governs the flow of every internal pipeline's requests as well as the data input and output. It has no customizable part, so the XML files in CHI do not define Session as a standalone unit.

Session is implemented mainly by the Session class in CamX, whose main interfaces are:

  • Initialize(): initializes the Session from the SessionCreateData parameter passed in.
  • NotifyResult(): the internal Pipelines deliver their results to the Session through this interface.
  • ProcessCaptureRequest(): called when the user decides to send a request into the Session.
  • StreamOn(): starts hardware data transfer for the Pipeline handle passed in.
  • StreamOff(): stops hardware data transfer for the Pipeline handle passed in.
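
Pulling these interfaces together, the way a usecase typically drives a session looks roughly like the sketch below. This is a simplified illustration, not real CamX code: BuildSessionCreateData(), hPipeline, and request are invented placeholders, and the real setup and error handling are considerably more involved.

// Sketch: a typical Session lifecycle as driven by a usecase (simplified).
SessionCreateData createData = BuildSessionCreateData(); // hypothetical helper
Session* pSession = CAMX_NEW Session();

if (CamxResultSuccess == pSession->Initialize(&createData))
{
    pSession->StreamOn(hPipeline);              // start hardware data transfer
    pSession->ProcessCaptureRequest(&request);  // submit one capture request
    // Results flow back through the callbacks registered in createData;
    // internally each pipeline reports to the session via NotifyResult().
    pSession->StreamOff(hPipeline);             // stop hardware data transfer
}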

2.4 Pipeline

A Pipeline is the collection of all the resources that provide one specific function; it maintains all the hardware resources and the data flow. Each Pipeline consists of Nodes and the Links between them. It is implemented in CamX by the Pipeline class, which maintains the software and hardware resources of the whole pipeline and handles its business logic. Let's briefly look at its main interfaces:

  • Create(): a static method that instantiates a Pipeline object from the PipelineCreateInputData passed in.
  • StreamOn(): tells the Pipeline to start hardware data transfer.
  • StreamOff(): tells the Pipeline to stop hardware data transfer.
  • FinalizePipeline(): completes the Pipeline's setup.
  • OpenRequest(): opens a CSL request for the data flow.
  • ProcessRequest(): starts issuing the request.
  • NotifyNodeMetadataDone(): provided by the Pipeline to its Nodes; a Node calls it when it has generated its metadata. Once every Node has reported its metadata complete, the Pipeline calls ProcessMetadataRequestIdDone to notify the Session.
  • NotifyNodePartialMetadataDone(): provided by the Pipeline to its Nodes; a Node calls it when it has generated its partial metadata. Once every Node has reported, the Pipeline calls ProcessPartialMetadataRequestIdDone to notify the Session.
  • SinkPortFenceSignaled(): notifies the Session that the fence of some sink port has been signaled.
  • NonSinkPortFenceSignaled(): notifies the Session that the fence of some non-sink port has been signaled.

A Pipeline's Nodes and the way they connect are defined in the XML, mainly through the following tags:

  • PipelineName: defines the name of this Pipeline
  • NodeList: defines all the Nodes of this Pipeline
  • PortLinkages: defines the connections between ports on the different Nodes

2.5 Node

A Node is a single abstract module with an independent processing function; it may be a hardware unit or a software unit. Nodes are implemented by the Node class in CamX. In CamX-CHI they fall into two broad groups: the nodes Qualcomm implements itself, including the hardware nodes, and the nodes CHI leaves for the user to implement. The main methods are:

  • Create(): a static method that instantiates a Node object.
  • ExecuteProcessRequest(): issues a request to a HWL node.
  • ProcessRequestIdDone(): once the Node has finished processing the current request, it notifies the Pipeline by calling this method.
  • ProcessMetadataDone(): once the Node has generated the metadata for the current request, it notifies the Pipeline by calling this method.
  • ProcessPartialMetadataDone(): once the Node has generated the partial metadata for the current request, it notifies the Pipeline by calling this method.
  • CreateImageBufferManager(): creates the ImageBufferManager.

The customizable parts of a Node are defined as tags in the XML:

  • NodeName: defines the name of the Node
  • NodeId: specifies the Node's ID; the IPE NodeId is 65538, the IFE NodeId is 65536, and user-defined nodes use NodeId 255.
  • NodeInstance: defines the name of this particular instance of the Node.
  • NodeInstanceId: specifies the ID of this Node instance.

2.6 Link

A Link defines a connection between ports. A port can establish as many links as needed to ports belonging to other Nodes; a Link is defined with its own tag, inside which one port acts as the input and another as the output.

A Link contains one SrcPort and one DstPort, the source (output) end and the destination (input) end respectively, plus a BufferProperties element describing the buffer configuration between the two ports. (An illustrative sketch follows.)
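
To make the Node, Link, and Port tags concrete, a fragment of a pipeline definition might look like the following. This is an illustrative sketch modeled on the tags described in these sections, not an excerpt from a real topology XML; exact tag names and nesting vary between CDK versions, and the BufferQueueDepth value is hypothetical.

<NodeList>
  <Node>
    <NodeName>IFE</NodeName>
    <NodeId>65536</NodeId>                   <!-- IFE -->
    <NodeInstance>IFEInstance0</NodeInstance>
    <NodeInstanceId>0</NodeInstanceId>
  </Node>
  <Node>
    <NodeName>IPE</NodeName>
    <NodeId>65538</NodeId>                   <!-- IPE -->
    <NodeInstance>IPEInstance0</NodeInstance>
    <NodeInstanceId>0</NodeInstanceId>
  </Node>
</NodeList>
<PortLinkages>
  <Link>
    <SrcPort>                                <!-- an IFE output port feeds... -->
      <PortId>0</PortId>
      <NodeId>65536</NodeId>
      <NodeInstanceId>0</NodeInstanceId>
    </SrcPort>
    <DstPort>                                <!-- ...an IPE input port -->
      <PortId>0</PortId>
      <NodeId>65538</NodeId>
      <NodeInstanceId>0</NodeInstanceId>
    </DstPort>
    <BufferProperties>                       <!-- buffer config between the ports -->
      <BufferFormat>YUV420NV12</BufferFormat>
      <BufferQueueDepth>8</BufferQueueDepth> <!-- hypothetical value -->
    </BufferProperties>
  </Link>
</PortLinkages>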

2.7 Port

A Port is a Node's input or output endpoint. In the XML one tag defines an input port and another defines an output port; every Node can use one or more input and output ports as needed. In the code they are defined with the OutputPort and InputPort structures.

  • PortId: the ID of this port
  • PortName: the name of this port
  • NodeName: the name of the Node this port belongs to
  • NodeId: the ID of the Node this port belongs to
  • NodeInstance: the instance name of the Node this port belongs to
  • NodeInstanceId: the instance ID of the Node this port belongs to

3. Component Relationships

The previous sections gave us a fairly clear picture of the basic components. But no framework is just a pile of components thrown together; each component has its own role and behavior pattern, and they are combined according to an agreed set of rules to accomplish the framework's particular functions. That naturally raises the question: how exactly are they organized in this framework, and how do they relate to one another? Let's start the analysis from the figure below:

As the figure shows, the components are composed by containment: a Usecase contains Features, a Feature contains Sessions, a Session maintains the flow of its internal Pipelines, and each Pipeline connects all of its Nodes through Links. Let's go through these relationships in detail:

First, a Usecase represents a specific image-capture scenario, such as the portrait scene or the rear-camera photo scene. It is created during initialization from the concrete information passed down by the upper layer. This does two things: it instantiates the specific Usecase, which manages all the resources of the scenario and carries its processing logic; and it fetches the corresponding Usecase definition from the XML, obtaining the pipelines used to implement the required functions.

Second, within a Usecase, Features are optional. If the user selects HDR mode, or needs to capture while zoomed, or some other special capability, one or more Features are created as needed while the Usecase is created; in general one Feature corresponds to one specific capability. If the scenario needs no special capability, it is entirely possible to use and create no Feature at all.

Next, every Usecase or Feature can contain one or more Sessions. Each Session directly manages its internal Pipelines and is responsible for their data flow. Every request is delivered by the Usecase or Feature through the Session to its internal Pipelines for processing, and once the data is processed the results are returned to CHI through the Session's methods. Whether the result then goes straight up to the framework, or is repackaged and sent down into another Session for post-processing, is entirely up to CHI.

Session and Pipeline have a one-to-many relationship. Usually a Session contains only one Pipeline, implementing one specific image-processing function, but not always: the Session in FeatureMFNR contains three pipelines, and rear portrait preview likewise uses one Session containing two Pipelines, one each for the main and secondary camera previews. It mainly depends on how many pipelines the current function needs and whether they are related to one another.

Also, per the definition of Pipeline above, it contains a certain number of Nodes; the more complex the implemented function, the more Nodes it contains and the more intricate the connections between them. For example, the bokeh effect of rear portrait preview is implemented by the RTBOfflinePreview pipeline, which merges the pair of frames from the main and secondary cameras into one frame with the bokeh effect applied.

Finally, how the Nodes in a Pipeline connect is described by the Links in the XML. Each Link defines an input end and an output end, corresponding to ports on different Nodes; in this way the output port of one Node is chained to the input port of another, one after the next. When image data enters at the head of the Pipeline, it flows from Node to Node along this predefined track, and is processed inside every Node it passes through. By the time the data reaches the output of the last Node it has been processed many times over, and the accumulation of all those processing steps is the function the Pipeline implements, such as noise reduction or bokeh.

4. Key Flows in Detail

4.1 Camera Provider Startup and Initialization

When the system boots, the Camera Provider main program is started (see Part 3 of this series for the details of that startup). During the program's initialization it calls get_number_of_cameras on the camera_module_t it obtained, to query how many cameras the lower layers support. Since this is the first such query, it triggers a great deal of initialization inside CamX-CHI, shown in the figure below:

4.1.1 HAL3Module Initialization

[->vendor\qcom\proprietary\camx\src\core\hal\camxhal3.cpp]

static int get_number_of_cameras(void)
{
    CAMX_ENTRYEXIT_SCOPE(CamxLogGroupHAL, SCOPEEventHAL3GetNumberOfCameras);

    return static_cast<int>(HAL3Module::GetInstance()->GetNumCameras());
}

[->vendor\qcom\proprietary\camx\src\core\hal\camxhal3module.cpp]

HAL3Module::HAL3Module()
{
CamxResult result = CamxResultSuccess;
CSLCameraPlatform CSLPlatform = {};

CAMX_LOG_CONFIG(CamxLogGroupHAL, "***************************************************");
CAMX_LOG_CONFIG(CamxLogGroupHAL, "SHA1: %s", CAMX_SHA1);
CAMX_LOG_CONFIG(CamxLogGroupHAL, "COMMITID: %s", CAMX_COMMITID);
CAMX_LOG_CONFIG(CamxLogGroupHAL, "BUILD TS: %s", CAMX_BUILD_TS);
CAMX_LOG_CONFIG(CamxLogGroupHAL, "***************************************************");

m_hChiOverrideModuleHandle = NULL;
m_numLogicalCameras = 0;
m_pStaticSettings = HwEnvironment::GetInstance()->GetStaticSettings();
result = CSLQueryCameraPlatform(&CSLPlatform);
m_hdmiRes.width = 0;
m_hdmiRes.height = 0;
m_hdmiRes.id = -1;
m_hdmiRes.have_hdmi_signal = 0;

CAMX_ASSERT(CamxResultSuccess == result);

CamX::Utils::Memset(&m_ChiAppCallbacks, 0, sizeof(m_ChiAppCallbacks));
CamX::Utils::Memset(m_pStaticMetadata, 0, sizeof(m_pStaticMetadata));

for (UINT32 sensor = 0; sensor < MaxNumImageSensors; sensor++)
{
m_torchStatus[sensor] = TorchModeStatusAvailableOff;
}
m_pMetadata = NULL;

// Set Camera Launch status to False at the time of constructor
DisplayConfigInterface::GetInstance()->SetCameraStatus(FALSE);

static const UINT NumCHIOverrideModules = 2;

UINT16 fileCount = 0;
const CHAR* pD = NULL;
INT fileIndexBitra = FILENAME_MAX;
INT fileIndex = 0;

CHAR moduleFileName[NumCHIOverrideModules * FILENAME_MAX];

switch (CSLPlatform.socId)
{
case CSLCameraTitanSocSM6350:
case CSLCameraTitanSocSM7225:
#if defined(_LP64)
fileCount = OsUtils::GetFilesFromPath("/vendor/lib64/camera/oem/bitra",
FILENAME_MAX,
&moduleFileName[0],
"*",
"chi",
"*",
"*",
&SharedLibraryExtension[0]);
if (0 == fileCount)
{
fileCount = OsUtils::GetFilesFromPath("/vendor/lib64/camera/qti/bitra",
FILENAME_MAX,
&moduleFileName[0],
"*",
"chi",
"*",
"*",
&SharedLibraryExtension[0]);
}
#else // using LP32
fileCount = OsUtils::GetFilesFromPath("/vendor/lib/camera/oem/bitra",
FILENAME_MAX,
&moduleFileName[0],
"*",
"chi",
"*",
"*",
&SharedLibraryExtension[0]);
if (0 == fileCount)
{
fileCount = OsUtils::GetFilesFromPath("/vendor/lib/camera/qti/bitra",
FILENAME_MAX,
&moduleFileName[0],
"*",
"chi",
"*",
"*",
&SharedLibraryExtension[0]);
}
#endif // _LP64
if (0 == fileCount)
{
fileCount = OsUtils::GetFilesFromPath(CHIOverrideModulePath,
FILENAME_MAX,
&moduleFileName[0],
"*",
"chi",
"*",
"*",
&SharedLibraryExtension[0]);
}
break;
default:
fileCount = OsUtils::GetFilesFromPath(CHIOverrideModulePath,
FILENAME_MAX,
&moduleFileName[0],
"*",
"chi",
"*",
"*",
&SharedLibraryExtension[0]);
break;
}

if (0 == fileCount)
{
CAMX_LOG_ERROR(CamxLogGroupHAL, "FATAL: No CHI Module library found in %s - Cannot proceed", CHIOverrideModulePath);
}
else
{
pD = OsUtils::StrStr(&moduleFileName[0], "bitra");

// pD is NULL if Bitra is not present in first file index
if (pD != NULL)
{
fileIndexBitra = 0;
fileIndex = FILENAME_MAX;
}

if (NumCHIOverrideModules >= fileCount)
{
if (CSLPlatform.socId == CSLCameraTitanSocSM6350 || CSLPlatform.socId == CSLCameraTitanSocSM7225)
{
CAMX_LOG_INFO(CamxLogGroupHAL, "opening CHI Module - %s", &moduleFileName[fileIndexBitra]);
m_hChiOverrideModuleHandle = OsUtils::LibMap(&moduleFileName[fileIndexBitra]);
}
else
{
CAMX_LOG_INFO(CamxLogGroupHAL, "opening CHI Module - %s", &moduleFileName[fileIndex]);
m_hChiOverrideModuleHandle = OsUtils::LibMap(&moduleFileName[fileIndex]);
}

if (NULL != m_hChiOverrideModuleHandle)
{
CHIHALOverrideEntry funcCHIHALOverrideEntry =
reinterpret_cast<CHIHALOverrideEntry>(
CamX::OsUtils::LibGetAddr(m_hChiOverrideModuleHandle, "chi_hal_override_entry"));

if (NULL != funcCHIHALOverrideEntry)
{
funcCHIHALOverrideEntry(&m_ChiAppCallbacks);

CAMX_ASSERT(NULL != m_ChiAppCallbacks.chi_get_num_cameras);
CAMX_ASSERT(NULL != m_ChiAppCallbacks.chi_get_camera_info);
CAMX_ASSERT(NULL != m_ChiAppCallbacks.chi_get_info);
CAMX_ASSERT(NULL != m_ChiAppCallbacks.chi_finalize_override_session);
CAMX_ASSERT(NULL != m_ChiAppCallbacks.chi_initialize_override_session);
CAMX_ASSERT(NULL != m_ChiAppCallbacks.chi_override_process_request);
CAMX_ASSERT(NULL != m_ChiAppCallbacks.chi_override_flush);
CAMX_ASSERT(NULL != m_ChiAppCallbacks.chi_override_dump);
CAMX_ASSERT(NULL != m_ChiAppCallbacks.chi_teardown_override_session);
CAMX_ASSERT(NULL != m_ChiAppCallbacks.chi_extend_open);
CAMX_ASSERT(NULL != m_ChiAppCallbacks.chi_extend_close);
CAMX_ASSERT(NULL != m_ChiAppCallbacks.chi_remap_camera_id);
CAMX_ASSERT(NULL != m_ChiAppCallbacks.chi_modify_settings);
CAMX_ASSERT(NULL != m_ChiAppCallbacks.chi_get_default_request_settings);
CAMX_ASSERT(NULL != m_ChiAppCallbacks.chi_override_getFaceRoiInfo);

if ((NULL != m_ChiAppCallbacks.chi_get_num_cameras) &&
(NULL != m_ChiAppCallbacks.chi_get_camera_info) &&
(NULL != m_ChiAppCallbacks.chi_get_info) &&
(NULL != m_ChiAppCallbacks.chi_finalize_override_session) &&
(NULL != m_ChiAppCallbacks.chi_initialize_override_session) &&
(NULL != m_ChiAppCallbacks.chi_override_process_request) &&
(NULL != m_ChiAppCallbacks.chi_override_flush) &&
(NULL != m_ChiAppCallbacks.chi_override_dump) &&
(NULL != m_ChiAppCallbacks.chi_teardown_override_session) &&
(NULL != m_ChiAppCallbacks.chi_extend_open) &&
(NULL != m_ChiAppCallbacks.chi_extend_close) &&
(NULL != m_ChiAppCallbacks.chi_remap_camera_id) &&
(NULL != m_ChiAppCallbacks.chi_modify_settings) &&
(NULL != m_ChiAppCallbacks.chi_get_default_request_settings)&&
(NULL != m_ChiAppCallbacks.chi_override_getFaceRoiInfo))
{
CAMX_LOG_VERBOSE(CamxLogGroupHAL, "CHI Module library function pointers exchanged");
}
else
{
CAMX_LOG_ERROR(CamxLogGroupHAL, "CHI Module library function pointers exchanged FAILED");
}
}
}
else
{
CAMX_LOG_ERROR(CamxLogGroupHAL, "Couldn't open CHI Module lib. All usecases will go thru HAL implementation");
}
}
else
{
if (fileCount > NumCHIOverrideModules)
{
CAMX_LOG_ERROR(CamxLogGroupHAL, "Cannot have more than %d CHI override module present", NumCHIOverrideModules);
}
}
}

if (NULL != m_ChiAppCallbacks.chi_get_num_cameras)
{
m_ChiAppCallbacks.chi_get_num_cameras(&m_numFwCameras, &m_numLogicalCameras);
}
else
{
CAMX_ASSERT_ALWAYS_MESSAGE("Override module is mandatory. Returning 0 cameras, and app will not behave properly");
m_numFwCameras = 0;
}

m_pThermalManager = ThermalManager::Create();
if (NULL == m_pThermalManager)
{
CAMX_LOG_WARN(CamxLogGroupHAL, "Failed to create ThermalManager");
// Not a fatal error. Camera can continue to operate without this
}

// There are arrays capped with a max number of sensors. If there are more than MaxNumImageSensors logical
// cameras, this assert will fire.
CAMX_ASSERT(m_numLogicalCameras < MaxNumImageSensors);
}

4.1.2 HwEnvironment Initialization

[->vendor\qcom\proprietary\camx\src\core\camxhwenvironment.cpp]

const StaticSettings* HwEnvironment::GetStaticSettings() const
{
    return m_pSettingsManager->GetStaticSettings();
}

HwEnvironment::HwEnvironment()
    : m_initCapsStatus(InitCapsInvalid)
    , m_pNCSObject(NULL)
{
    Initialize();
}
CamxResult HwEnvironment::Initialize()
{
CamxResult result = CamxResultSuccess;
CSLInitializeParams params = { 0 };
SettingsManager* pStaticSettingsManager = SettingsManager::Create(NULL);
ExternalComponentInfo* pExternalComponent = GetExternalComponent();

m_pHWEnvLock = Mutex::Create("HwEnvLock");
CAMX_ASSERT(NULL != m_pHWEnvLock);

CAMX_ASSERT(NULL != pStaticSettingsManager);

if (NULL != pStaticSettingsManager)
{
const StaticSettings* pStaticSettings = pStaticSettingsManager->GetStaticSettings();

CAMX_ASSERT(NULL != pStaticSettings);

if (NULL != pStaticSettings)
{
params.mode = pStaticSettings->CSLMode;
params.emulatedSensorParams.enableSensorSimulation = pStaticSettings->enableSensorEmulation;
params.emulatedSensorParams.dumpSensorEmulationOutput = pStaticSettings->dumpSensorEmulationOutput;

OsUtils::StrLCpy(params.emulatedSensorParams.sensorEmulatorPath,
pStaticSettings->sensorEmulatorPath,
sizeof(pStaticSettings->sensorEmulatorPath));

OsUtils::StrLCpy(params.emulatedSensorParams.sensorEmulator,
pStaticSettings->sensorEmulator,
sizeof(pStaticSettings->sensorEmulator));

result = CSLInitialize(&params);

if (CamxResultSuccess == result)
{
// Query the camera platform
result = QueryHwContextStaticEntryMethods();
}

if (CamxResultSuccess == result)
{
m_pHwFactory = m_staticEntryMethods.CreateHwFactory();

if (NULL == m_pHwFactory)
{
CAMX_ASSERT_ALWAYS_MESSAGE("Failed to create the HW factory");
result = CamxResultEFailed;
}
}

if (CamxResultSuccess == result)
{
m_pSettingsManager = m_pHwFactory->CreateSettingsManager();

if (NULL == m_pSettingsManager)
{
CAMX_ASSERT_ALWAYS_MESSAGE("Failed to create the HW settings manager");
result = CamxResultEFailed;
}
}

if (CamxResultSuccess == result)
{
m_staticEntryMethods.GetHWBugWorkarounds(&m_workarounds);
}
}

pStaticSettingsManager->Destroy();
pStaticSettingsManager = NULL;
}

CAMX_ASSERT(NULL != pExternalComponent);
if ((CamxResultSuccess == result) && (NULL != pExternalComponent))
{
result = ProbeChiComponents(pExternalComponent, &m_numExternalComponent);
}

if (CamxResultSuccess == result)
{
// Load the OEM sensor capacity customization functions
CAMXCustomizeCAMXInterface camxInterface;
camxInterface.pGetHWEnvironment = HwEnvironment::GetInstance;
CAMXCustomizeEntry(&m_pOEMInterface, &camxInterface);
}

if (CamxResultSuccess != result)
{
CAMX_LOG_ERROR(CamxLogGroupCore, "FATAL ERROR: Raise SigAbort. HwEnvironment initialization failed");
m_numberSensors = 0;
OsUtils::RaiseSignalAbort();
}
else
{
m_initCapsStatus = InitCapsInitialize;
}
return result;
}

4.1.3 Creating the SettingsManager

[->vendor\qcom\proprietary\camx\src\core\camxsettingsmanager.cpp]

SettingsManager* SettingsManager::Create(
    StaticSettings* pStaticSettings)
{
    CamxResult result = CamxResultSuccess;

    // Since this creation function is only used for static initialization, we don't want to track memory.
    SettingsManager* pSettingsManager = CAMX_NEW SettingsManager();
    if (pSettingsManager != NULL)
    {
        result = pSettingsManager->Initialize(pStaticSettings);
        if (CamxResultSuccess != result)
        {
            CAMX_DELETE pSettingsManager;
            pSettingsManager = NULL;
        }
    }
    else
    {
        CAMX_LOG_ERROR(CamxLogGroupCore, "Out of memory; cannot create SettingsManager");
    }

    return pSettingsManager;
}
CamxResult SettingsManager::Initialize(
StaticSettings* pStaticSettings)
{
CamxResult result = CamxResultSuccess;

// If the client gave us a static settings, use that. Otherwise, create our own.
if (NULL != pStaticSettings)
{
m_pStaticSettings = pStaticSettings;
}
else
{
m_pStaticSettings = reinterpret_cast<StaticSettings*>(CAMX_CALLOC(sizeof(StaticSettings)));
if (NULL != m_pStaticSettings)
{
m_internallyAllocatedStaticSettings = TRUE;
}
else
{
CAMX_LOG_ERROR(CamxLogGroupCore, "Out of memory; cannot allocate static settings structure");
result = CamxResultENoMemory;
}
}

// Create the override settings file helper
m_pOverrideSettingsStore = OverrideSettingsFile::Create();

if (NULL == m_pOverrideSettingsStore)
{
result = CamxResultEFailed;
}

// Initialize the settings structure and override with user's values
if (CamxResultSuccess == result)
{
// Populate the default settings
InitializeDefaultSettings();
InitializeDefaultDebugSettings();

#if SETTINGS_DUMP_ENABLE
if (CamxResultSuccess == result)
{
// Print all current settings
DumpSettings();

// Dump the override settings from our override settings stores
m_pOverrideSettingsStore->DumpOverriddenSettings();
}
#endif // SETTINGS_DUMP_ENABLE

// Load the override settings from our override settings stores
result = LoadOverrideSettings(m_pOverrideSettingsStore);
if (CamxResultSuccess != result)
{
CAMX_LOG_ERROR(CamxLogGroupCore, "Failed to load override settings.");
}

if (CamxResultSuccess == result)
{
result = LoadOverrideProperties(m_pOverrideSettingsStore, TRUE);
if (CamxResultSuccess != result)
{
CAMX_LOG_ERROR(CamxLogGroupCore, "Failed to load override properties.");
}
}
}

// Validate the updated settings structures
if (CamxResultSuccess == result)
{
result = ValidateSettings();
}

#if SETTINGS_DUMP_ENABLE
if (CamxResultSuccess == result)
{
// Print all current settings
DumpSettings();

// Dump the override settings from our override settings stores
m_pOverrideSettingsStore->DumpOverriddenSettings();
}
#endif // SETTINGS_DUMP_ENABLE

// Push log settings to utils
UpdateLogSettings();

return result;
}

Reading the override settings file camxoverridesettings.txt

[->vendor\qcom\proprietary\camx\src\core\camxoverridesettingsfile.cpp]

OverrideSettingsFile* OverrideSettingsFile::Create()
{
    CamxResult result = CamxResultSuccess;
    OverrideSettingsFile* pOverrideSettingsFile = CAMX_NEW OverrideSettingsFile();
    if (pOverrideSettingsFile != NULL)
    {
        result = pOverrideSettingsFile->Initialize();
        if (CamxResultSuccess != result)
        {
            CAMX_DELETE pOverrideSettingsFile;
            pOverrideSettingsFile = NULL;
        }
    }
    else
    {
        CAMX_LOG_ERROR(CamxLogGroupCore, "Out of memory; cannot create OverrideSettingsFile");
    }

    return pOverrideSettingsFile;
}
static const CHAR* OverrideSettingsTextFileName =
{
"camxoverridesettings.txt"
};

CamxResult OverrideSettingsFile::Initialize()
{
CamxResult result = CamxResultSuccess;

// Create the hash map to hold the override settings. Key is the settings string hash and value is a pointer to a
// SettingCacheEntry structure.
HashmapParams hashmapParams = {0};
hashmapParams.keySize = sizeof(UINT32);
hashmapParams.valSize = 0;
m_pOverrideSettingsCache = Hashmap::Create(&hashmapParams);
if (NULL == m_pOverrideSettingsCache)
{
result = CamxResultENoMemory;
}

if (CamxResultSuccess == result)
{
// Get the properties set on the device.
UpdatePropertyList();

// Since scratchString is used below to get the raw line from the override text file, the 128 should be way more than
// enough for the max length of whatever non-value stuff is specified on the override line (i.e. variable name, space,
// equals sign space, etc). Then, MaxStringLength is the max length string that a setting can have.
CHAR scratchString[MaxStringLength + 128] = {0};
FILE* pOverrideSettingsTextFile = NULL;

// Search the paths to find the files
for (UINT directory = 0; directory < CAMX_ARRAY_SIZE(OverrideSettingsTextFileDirectories); directory++)
{
{
OsUtils::SNPrintF(scratchString,
sizeof(scratchString),
"%s%s%s",
OverrideSettingsTextFileDirectories[directory],
PathSeparator,
OverrideSettingsTextFileName);
pOverrideSettingsTextFile = OsUtils::FOpen(scratchString, "r");
if (NULL == pOverrideSettingsTextFile)
{
// We didn't find an override settings text file, try another path
CAMX_LOG_VERBOSE(CamxLogGroupCore, "Could not find override settings text file at: %s", scratchString);
}
else
{
// We found an override settings text file.
CAMX_LOG_INFO(CamxLogGroupCore, "Opening override settings text file: %s", scratchString);

CHAR* pSettingString = NULL;
CHAR* pValueString = NULL;
CHAR* pContext = NULL;
UINT32 settingStringHash = 0;
CHAR strippedLine[MaxStringLength + 128];

// Parse the settings file one line at a time
while (NULL != OsUtils::FGetS(scratchString, sizeof(scratchString), pOverrideSettingsTextFile))
{
// First strip off all whitespace from the line to make it easier to handle enum type settings with
// combined values (e.g. A = B | C | D). After removing the whitespace, we only need to use '=' as the
// delimiter to extract the setting/value string pair (e.g. setting string = "A", value string =
// "B|C|D").
Utils::Memset(strippedLine, 0x0, sizeof(strippedLine));
OsUtils::StrStrip(strippedLine, scratchString, sizeof(strippedLine));

// Extract a setting/value string pair.
pSettingString = OsUtils::StrTokReentrant(strippedLine, "=", &pContext);
pValueString = OsUtils::StrTokReentrant(NULL, "=", &pContext);

// Check for invalid lines
if ((NULL == pSettingString) || (NULL == pValueString) || ('\0' == pValueString[0]))
{
continue;
}

// Discard this line if the setting string starts with a semicolon, indicating a comment
if (';' == pSettingString[0])
{
continue;
}

// Check whether the setting string is either an obfuscated hash or a human-readable setting name
if (('0' == pSettingString[0]) &&
(('x' == pSettingString[1]) || ('X' == pSettingString[1])))
{
// Setting string is a hex value, indicating it is a hash
settingStringHash = static_cast<UINT32>(OsUtils::StrToUL(pSettingString, NULL, 0));
}
else
{
// Setting string is a non-hex value, so get the hash
settingStringHash = GetSettingsStringHashValue(pSettingString);
}

// Check if there is an existing entry. If not, create a new one. If so, update the value.
SettingCacheEntry* pSettingCacheEntry = FindOverrideSetting(settingStringHash);
if (NULL == pSettingCacheEntry)
{
// No existing entry, add a key/value entry to the override settings cache
pSettingCacheEntry = static_cast<SettingCacheEntry*>(CAMX_CALLOC(sizeof(SettingCacheEntry)));
if (NULL == pSettingCacheEntry)
{
CAMX_LOG_ERROR(CamxLogGroupCore, "Out of memory; cannot allocate override setting entry");
result = CamxResultENoMemory;
break;
}

// Populate override setting entry data (value string is updated below)
pSettingCacheEntry->settingStringHash = settingStringHash;

OsUtils::StrLCpy(pSettingCacheEntry->keyString,
pSettingString,
sizeof(pSettingCacheEntry->keyString));

// Add the new override setting entry to override settings cache with the string has as the key
m_pOverrideSettingsCache->Put(&settingStringHash, pSettingCacheEntry);
}

// Set/overwrite value of setting
OsUtils::StrLCpy(pSettingCacheEntry->valueString,
pValueString,
sizeof(pSettingCacheEntry->valueString));
}

OsUtils::FClose(pOverrideSettingsTextFile);
pOverrideSettingsTextFile = NULL;
}
}
}
}

return result;
}

[->vendor\qcom\proprietary\camx\src\osutils\camxosutils.h]

static const CHAR OverrideSettingsPath[]   = "/vendor/etc/camera";

4.1.4 CSLModeManager Initialization

[->vendor\qcom\proprietary\camx\src\csl\camxcsl.cpp]

CamxResult CSLInitialize(
    CSLInitializeParams* pInitializeParams)
{
    CAMX_ENTRYEXIT_SCOPE(CamxLogGroupCSL, CamX::SCOPEEventCSLInitialize);

    CAMX_STATIC_ASSERT(static_cast<INT>(CSLMode::CSLHwEnabled) ==
                       static_cast<INT>(CamX::CSLModeType::CSLModeHardware));
    CAMX_STATIC_ASSERT(static_cast<INT>(CSLMode::CSLIFHEnabled) ==
                       static_cast<INT>(CamX::CSLModeType::CSLModeIFH));
    CAMX_STATIC_ASSERT(static_cast<INT>(CSLMode::CSLPresilEnabled) ==
                       static_cast<INT>(CamX::CSLModeType::CSLModePresil));
    CAMX_STATIC_ASSERT(static_cast<INT>(CSLMode::CSLPresilRUMIEnabled) ==
                       static_cast<INT>(CamX::CSLModeType::CSLModePresilRUMI));

    if (NULL != g_pCSLModeManager)
    {
        CAMX_DELETE g_pCSLModeManager;
        g_pCSLModeManager = NULL;
    }

    g_pCSLModeManager = CAMX_NEW CSLModeManager(pInitializeParams);

    // Check g_pCSLModeManager before de-referencing it.
    if (NULL != g_pCSLModeManager)
    {
        CSLJumpTable* pJumpTable = g_pCSLModeManager->GetJumpTable();
        return pJumpTable->CSLInitialize();
    }
    else
    {
        CAMX_LOG_ERROR(CamxLogGroupCSL, "CSL not initialized");
        return CamxResultEFailed;
    }
}
CamxResult CSLInitializeHW()
{
CamxResult result = CamxResultEFailed;
CHAR syncDeviceName[CSLHwMaxDevName] = {0};

if (FALSE == CSLHwIsHwInstanceValid())
{
if (TRUE == CSLHwEnumerateAndAddCSLHwDevice(CSLInternalHwVideodevice, CAM_VNODE_DEVICE_TYPE))
{
if (TRUE == CSLHwEnumerateAndAddCSLHwDevice(CSLInternalHwVideoSubdevice, CAM_CPAS_DEVICE_TYPE))
{
CAMX_LOG_VERBOSE(CamxLogGroupCSL, "Platform family=%d, version=%d.%d.%d, cpas version=%d.%d.%d",
g_CSLHwInstance.pCameraPlatform.family,
g_CSLHwInstance.pCameraPlatform.platformVersion.majorVersion,
g_CSLHwInstance.pCameraPlatform.platformVersion.minorVersion,
g_CSLHwInstance.pCameraPlatform.platformVersion.revVersion,
g_CSLHwInstance.pCameraPlatform.CPASVersion.majorVersion,
g_CSLHwInstance.pCameraPlatform.CPASVersion.minorVersion,
g_CSLHwInstance.pCameraPlatform.CPASVersion.revVersion);

if (FALSE == CSLHwEnumerateAndAddCSLHwDevice(CSLInternalHwVideoSubdeviceAll, 0))
{
CAMX_LOG_ERROR(CamxLogGroupCSL, "No KMD devices found");
}
else
{
CAMX_LOG_VERBOSE(CamxLogGroupCSL, "Total KMD subdevices found =%d", g_CSLHwInstance.kmdDeviceCount);
}
// Init the memory manager data structures here
CamX::Utils::Memset(g_CSLHwInstance.memManager.bufferInfo, 0, sizeof(g_CSLHwInstance.memManager.bufferInfo));
// Init the sync manager here
g_CSLHwInstance.lock->Lock();
g_CSLHwInstance.pSyncFW = CamX::SyncManager::GetInstance();
if (NULL != g_CSLHwInstance.pSyncFW)
{
CSLHwGetSyncHwDevice(syncDeviceName, CSLHwMaxDevName);
CAMX_LOG_VERBOSE(CamxLogGroupCSL, "Sync device found = %s", syncDeviceName);
result = g_CSLHwInstance.pSyncFW->Initialize(syncDeviceName);
if (CamxResultSuccess != result)
{
CAMX_LOG_ERROR(CamxLogGroupCSL, "CSL failed to initialize SyncFW");
result = g_CSLHwInstance.pSyncFW->Destroy();
g_CSLHwInstance.pSyncFW = NULL;
}
}
g_CSLHwInstance.lock->Unlock();
CSLHwInstanceSetState(CSLHwValidState);
result = CamxResultSuccess;
CAMX_LOG_VERBOSE(CamxLogGroupCSL, "Successfully acquired requestManager");
}
else
{
CAMX_LOG_ERROR(CamxLogGroupCSL, "Failed to acquire CPAS");
}
}
else
{
CAMX_LOG_ERROR(CamxLogGroupCSL, "Failed to acquire requestManager invalid");
}
}
else
{
CAMX_LOG_ERROR(CamxLogGroupCSL, "CSL in Invalid State");
}
return result;
}

4.1.5 Summary

The main flow is as follows:

  1. The static method HAL3Module::GetInstance() instantiates the HAL3Module object. Its constructor calls the static method HwEnvironment::GetInstance(), which instantiates the HwEnvironment object; that constructor in turn instantiates a SettingsManager, whose constructor uses an OverrideSettingsFile object to read the platform-specific configuration in /vendor/etc/camera/camxoverridesettings.txt (this override mechanism makes it easy for platform vendors to add their own configuration). Platform-specific items can be added to this file, for example multiCameraEnable to indicate whether the current platform supports multiple cameras, or overrideLogLevels to configure the log level of the CamX-CHI code (see the sample file after this list).
  2. The HwEnvironment constructor also calls its Initialize method, which instantiates a CSLModeManager object and uses the interfaces it provides to obtain information about all the hardware devices supported below, including the Camera Request Manager, the CPAS module (the driver module CSL uses to obtain camera platform driver information and to control power for the IPE/BPS blocks), and hardware modules such as the Sensor/IPE/Flash. It also calls CSLHwInternalProbeSensorHW to obtain the information of the sensor modules installed on the current device, caching everything for later stages. In short, during HwEnvironment initialization all the underlying hardware driver modules are discovered by probing, and their information is stored for later use.
  3. Then HwEnvironment's ProbeChiComponents method is called to look under /vendor/lib64/camera/components for the .so library generated by each Node and obtain each Node's standard external interface. These Nodes include not only the user-defined CHI modules but also the hardware modules implemented in CamX; in the end they are all stored into ExternalComponentInfo objects for later stages to use.
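
For reference, the override file that item 1 describes is just a list of 'setting=value' lines, with ';' starting a comment (see the parsing loop in OverrideSettingsFile::Initialize above). The values below are made up for illustration:

; /vendor/etc/camera/camxoverridesettings.txt (illustrative contents)
multiCameraEnable=TRUE
overrideLogLevels=0x1F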

Another important operation during the initialization phase is that CamX and CHI dlopen each other's .so library, obtain each other's entry method, and then use those entry methods to exchange their operation method tables, through which the two sides communicate from then on. The main flow is shown in the figure below:

As the figure shows, the HAL3Module constructor loads the com.qti.chi.override.so library with dlopen, resolves CHI's entry method chi_hal_override_entry with dlsym, and calls it, passing in the HAL3Module member m_ChiAppCallbacks (a CHIAppCallbacks). This structure contains many function pointers, each corresponding to one method in CHI's operation method table. Once inside CHI, the addresses of CHI's local operation methods are assigned into m_ChiAppCallbacks one by one, so that CamX can subsequently call CHI's methods through this member and thereby stay in communication with CHI.

In the same way, when CHI's ExtensionModule is initialized, its constructor loads the camera.qcom.so library with dlopen, resolves its entry method ChiEntry with dlsym, and then calls it with g_chiContextOps (a ChiContextOps, a structure defining many function pointers) as the argument. Once inside CamX, the addresses of CamX's local operation methods are assigned to every function pointer in g_chiContextOps, after which CHI can reach CamX's methods through g_chiContextOps. (A sketch of the CamX side of this handshake follows.)
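
The CamX side of this handshake can be sketched roughly as follows. This is a simplified illustration of the mechanism just described, not the real code (which searches several library paths, validates many more callbacks, and wraps dlopen/dlsym in OsUtils helpers); LoadChiOverride() is an invented helper name.

// Sketch: CamX loading the CHI override library and exchanging callbacks.
#include <dlfcn.h>

typedef VOID (*CHIHALOverrideEntry)(CHIAppCallbacks* pCallbacks);

VOID HAL3Module::LoadChiOverride()  // hypothetical helper
{
    // Map the CHI override library.
    m_hChiOverrideModuleHandle = dlopen("com.qti.chi.override.so", RTLD_NOW);

    if (NULL != m_hChiOverrideModuleHandle)
    {
        // Resolve CHI's entry point...
        CHIHALOverrideEntry entry = reinterpret_cast<CHIHALOverrideEntry>(
            dlsym(m_hChiOverrideModuleHandle, "chi_hal_override_entry"));

        if (NULL != entry)
        {
            // ...and let CHI fill our callback table with the addresses of its
            // operation methods (chi_get_num_cameras,
            // chi_initialize_override_session, ...). From here on CamX calls
            // into CHI through m_ChiAppCallbacks.
            entry(&m_ChiAppCallbacks);
        }
    }
}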

4.2 Opening and Initializing the Camera Device

Once the user launches the camera application, the app calls CameraManager's openCamera method, which eventually reaches CameraService::connectDevice in the Camera Service. The service then notifies the Provider through the HIDL interface ICameraDevice::open(); inside the Provider, the open method in the methods table of the previously obtained camera_module_t is called to acquire a camera device, which corresponds to the camera3_device_t structure in the HAL. Right after that, the Provider calls the initialize method of the camera3_device_t it obtained to perform initialization. Let's analyze in detail how CamX-CHI implements open and initialize:

4.2.1 open

[->hardware\interfaces\camera\common\1.0\default\CameraModule.cpp]

camera_module_t* mModule;

int CameraModule::open(const char* id, struct hw_device_t** device) {
    int res;
    ATRACE_BEGIN("camera_module->open");
    res = filterOpenErrorCode(mModule->common.methods->open(&mModule->common, id, device));
    ATRACE_END();
    return res;
}

[->vendor\qcom\proprietary\camx\src\core\hal\camxhal3entry.cpp]

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// hw_module_methods_t Entry Points
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// open
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
int open(
    const struct hw_module_t* pHwModuleAPI,
    const char*               pCameraIdAPI,
    struct hw_device_t**      ppHwDeviceAPI)
{
    /// @todo (CAMX-43) - Reload Jumptable from settings
    JumpTableHAL3* pHAL3 = static_cast<JumpTableHAL3*>(g_dispatchHAL3.GetJumpTable());

    CAMX_ASSERT(pHAL3);
    CAMX_ASSERT(pHAL3->open);

    return pHAL3->open(pHwModuleAPI, pCameraIdAPI, ppHwDeviceAPI);
}

[->vendor\qcom\proprietary\camx\src\core\hal\camxhal3.h]

////////////////////////////////////////////////////////////////////////////
// hw_module_methods_t entry points
////////////////////////////////////////////////////////////////////////////
int (*open)(
    const struct hw_module_t*,
    const char*,
    struct hw_device_t**);

[->vendor\qcom\proprietary\camx\src\core\hal\camxhal3.cpp]

static int open(
const struct hw_module_t* pHwModuleAPI,
const char* pCameraIdAPI,
struct hw_device_t** ppHwDeviceAPI)
{
CAMX_ENTRYEXIT_SCOPE(CamxLogGroupHAL, SCOPEEventHAL3Open);

CamxResult result = CamxResultSuccess;
CAMX_ASSERT(NULL != pHwModuleAPI);
CAMX_ASSERT(NULL != pHwModuleAPI->id);
CAMX_ASSERT(NULL != pHwModuleAPI->name);
CAMX_ASSERT(NULL != pHwModuleAPI->author);
CAMX_ASSERT(NULL != pHwModuleAPI->methods);
CAMX_ASSERT('\0' != pCameraIdAPI[0]);
CAMX_ASSERT(NULL != pCameraIdAPI);
CAMX_ASSERT(NULL != ppHwDeviceAPI);

if ((NULL != pHwModuleAPI) &&
(NULL != pHwModuleAPI->id) &&
(NULL != pHwModuleAPI->name) &&
(NULL != pHwModuleAPI->author) &&
(NULL != pHwModuleAPI->methods) &&
(NULL != pCameraIdAPI) &&
('\0' != pCameraIdAPI[0]) &&
(NULL != ppHwDeviceAPI))
{
CamX::OfflineLogger* pOfflineLoggerASCII = CamX::OfflineLogger::GetInstance(OfflineLoggerType::ASCII);
if (NULL != pOfflineLoggerASCII)
{
pOfflineLoggerASCII->NotifyCameraOpen();
}
CamX::OfflineLogger* pOfflineLoggerBinary = CamX::OfflineLogger::GetInstance(OfflineLoggerType::BINARY);
if (NULL != pOfflineLoggerBinary)
{
pOfflineLoggerBinary->NotifyCameraOpen();
}

UINT32 cameraId = 0;
UINT32 logicalCameraId = 0;
CHAR* pNameEnd = NULL;

cameraId = OsUtils::StrToUL(pCameraIdAPI, &pNameEnd, 10);

const StaticSettings* pStaticSettings = HwEnvironment::GetInstance()->GetStaticSettings();

// Default value of forceCameraID override is set as 25, which is an emperical high value that is not expected for any camera
// Value other than 25 when given for forceCameraID, is set as the cameraID during camera open
// Physical camera ID's of 0, 1, 2 are not to be forced to other logical camera ID's
if ((25 != pStaticSettings->forceCameraID) && (0 != cameraId) && (1 != cameraId) && (2 != cameraId))
{
CAMX_LOG_CONFIG(CamxLogGroupHAL, "cameraId from App: %d, Forced camera id: %d", cameraId, pStaticSettings->forceCameraID);
cameraId = pStaticSettings->forceCameraID;
}

if (*pNameEnd != '\0')
{
CAMX_LOG_ERROR(CamxLogGroupHAL, "Invalid camera id: %s", pCameraIdAPI);
// HAL interface requires -EINVAL (EInvalidArg) for invalid arguments
result = CamxResultEInvalidArg;
}

if (CamxResultSuccess == result)
{
// Framework camera ID should only be known to these static landing functions, and the remap function
logicalCameraId = GetCHIAppCallbacks()->chi_remap_camera_id(cameraId, IdRemapCamera);

// Reserve the Torch resource for camera.
// If torch already switched on, then turn it off and reserve for camera.
HAL3Module::GetInstance()->ReserveTorchForCamera(
GetCHIAppCallbacks()->chi_remap_camera_id(cameraId, IdRemapTorch), cameraId);

// Sample code to show how the VOID* can be used in ExtendOpen
CHIEXTENDSETTINGS extend = { 0 };
CHISETTINGTOKEN tokenList[NumExtendSettings] = { { 0 } };
extend.pTokens = tokenList;

GenerateExtendOpenData(NumExtendSettings, &extend);

// Reserve the camera to detect if it is already open or too many concurrent are open
CAMX_LOG_CONFIG(CamxLogGroupHAL, "HalOp: Begin OPEN, logicalCameraId: %d, cameraId: %d",
logicalCameraId, cameraId);
result = HAL3Module::GetInstance()->ProcessCameraOpen(logicalCameraId, &extend);
}

if (CamxResultSuccess == result)
{
// Sample code to show how the VOID* can be used in ModifySettings
ChiModifySettings setting[NumExtendSettings] = { { { 0 } } };
GenerateModifySettingsData(setting);

for (UINT i = 0; i < NumExtendSettings; i++)
{
GetCHIAppCallbacks()->chi_modify_settings(&setting[i]);
}

CAMX_LOG_INFO(CamxLogGroupHAL, "Open: overrideCameraClose is %d , overrideCameraOpen is %d ",
pStaticSettings->overrideCameraClose, pStaticSettings->overrideCameraOpen);

const HwModule* pHwModule = reinterpret_cast<const HwModule*>(pHwModuleAPI);
HALDevice* pHALDevice = HALDevice::Create(pHwModule, logicalCameraId, cameraId);

if (NULL != pHALDevice)
{
camera3_device_t* pCamera3Device = reinterpret_cast<camera3_device_t*>(pHALDevice->GetCameraDevice());
camera3_device_t& rCamera3Device = *pCamera3Device;
*ppHwDeviceAPI = &pCamera3Device->common;
BINARY_LOG(LogEvent::HAL3_Open, rCamera3Device, logicalCameraId, cameraId);
}
else
{
// HAL interface requires -ENODEV (EFailed) for all other internal errors
result = CamxResultEFailed;
CAMX_LOG_ERROR(CamxLogGroupHAL, "Error while opening camera");

CHIEXTENDSETTINGS extend = { 0 };
CHISETTINGTOKEN tokenList[NumExtendSettings] = { { 0 } };
extend.pTokens = tokenList;

GenerateExtendCloseData(NumExtendSettings, &extend);

// Allow the camera to be reopened later
HAL3Module::GetInstance()->ProcessCameraClose(logicalCameraId, &extend);

ChiModifySettings setting[NumExtendSettings] = { { { 0 } } };
GenerateModifySettingsData(setting);

for (UINT i = 0; i < NumExtendSettings; i++)
{
GetCHIAppCallbacks()->chi_modify_settings(&setting[i]);
}
}
}

if (CamxResultSuccess != result)
{
// If open fails, then release the Torch resource that we reserved.
HAL3Module::GetInstance()->ReleaseTorchForCamera(
GetCHIAppCallbacks()->chi_remap_camera_id(cameraId, IdRemapTorch), cameraId);
}
CAMX_LOG_CONFIG(CamxLogGroupHAL, "HalOp: End OPEN, logicalCameraId: %d, cameraId: %d",
logicalCameraId, cameraId);
}
else
{
CAMX_LOG_ERROR(CamxLogGroupHAL, "Invalid argument(s) for open()");
// HAL interface requires -EINVAL (EInvalidArg) for invalid arguments
result = CamxResultEInvalidArg;
}

return Utils::CamxResultToErrno(result);
}

4.2.2 HALDevice Initialization

HALDevice* HALDevice::Create(
    const HwModule* pHwModule,
    UINT32          cameraId,
    UINT32          frameworkId)
{
    CamxResult result     = CamxResultENoMemory;
    HALDevice* pHALDevice = CAMX_NEW HALDevice;

    if (NULL != pHALDevice)
    {
        pHALDevice->m_fwId = frameworkId;

        result = pHALDevice->Initialize(pHwModule, cameraId);

        if (CamxResultSuccess != result)
        {
            pHALDevice->Destroy();

            pHALDevice = NULL;
        }
    }

    return pHALDevice;
}

HALDevice::Initialize

CamxResult HALDevice::Initialize(
const HwModule* pHwModule,
UINT32 cameraId)
{
CamxResult result = CamxResultSuccess;

m_cameraId = cameraId;

if (CamxResultSuccess == result)
{
m_camera3Device.hwDevice.tag = HARDWARE_DEVICE_TAG; /// @todo (CAMX-351) Get from local macro

#if ((CAMX_ANDROID_API) && (CAMX_ANDROID_API >= 28)) // Android-P or better
m_camera3Device.hwDevice.version = CAMERA_DEVICE_API_VERSION_3_5;
#else
m_camera3Device.hwDevice.version = CAMERA_DEVICE_API_VERSION_3_3;
#endif // ((CAMX_ANDROID_API) && (CAMX_ANDROID_API >= 28))

m_camera3Device.hwDevice.close = reinterpret_cast<CloseFunc>(GetHwDeviceCloseFunc());
m_camera3Device.pDeviceOps = reinterpret_cast<Camera3DeviceOps*>(GetCamera3DeviceOps());
m_camera3Device.pPrivateData = this;
// NOWHINE CP036a: Need exception here
m_camera3Device.hwDevice.pModule = const_cast<HwModule*>(pHwModule);

m_HALCallbacks.process_capture_result = ProcessCaptureResult;
m_HALCallbacks.notify_result = Notify;
}

ClearFrameworkRequestBuffer();

SIZE_T entryCapacity;
SIZE_T dataSize;
HAL3MetadataUtil::CalculateSizeAllMeta(&entryCapacity, &dataSize, TagSectionVisibleToFramework);

m_pResultMetadata = HAL3MetadataUtil::CreateMetadata(
entryCapacity,
dataSize);

for (UINT i = RequestTemplatePreview; i < RequestTemplateCount; i++)
{
if (NULL == m_pDefaultRequestMetadata[i])
{
ConstructDefaultRequestSettings(static_cast<Camera3RequestTemplate>(i));
}
}

const StaticSettings* pStaticSettings = HwEnvironment::GetInstance()->GetStaticSettings();

m_numPartialResult = pStaticSettings->numMetadataResults;

/* We will increment the Partial result count by 1 if CHI also has its own implementation */
if (CHIPartialDataSeparate == pStaticSettings->enableCHIPartialData)
{
m_numPartialResult++;
}

m_tracingZoom = FALSE;
m_tunnellingEnabled = HAL3Module::GetInstance()->m_tunnellingEnabled;

return result;
}

4.2.3 initialize

[->vendor\qcom\proprietary\camx\src\core\hal\camxhal3entry.cpp]

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// camera3_device_ops_t Entry Points
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// initialize
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
int initialize(
const struct camera3_device* pCamera3DeviceAPI,
const camera3_callback_ops_t* pCamera3CbOpsAPI)
{
JumpTableHAL3* pHAL3 = static_cast<JumpTableHAL3*>(g_dispatchHAL3.GetJumpTable());

CAMX_ASSERT(pHAL3);
CAMX_ASSERT(pHAL3->initialize);

g_HAL3Entry.m_pCbOpsLock->Lock();

// See if there is already an entry for this device
Camera3CbOpsRedirect* pCamera3CbOps = NULL;
LDLLNode* pNode = g_HAL3Entry.m_cbOpsList.Head();

while (NULL != pNode)
{
if (pCamera3DeviceAPI == static_cast<Camera3CbOpsRedirect*>(pNode->pData)->pCamera3Device)
{
pCamera3CbOps = static_cast<Camera3CbOpsRedirect*>(pNode->pData);
break;
}
pNode = LightweightDoublyLinkedList::NextNode(pNode);
}

// Else create and add to list
if (NULL == pCamera3CbOps)
{
pNode = reinterpret_cast<LDLLNode*>(CAMX_CALLOC(sizeof(LDLLNode)));

if (NULL != pNode)
{
pCamera3CbOps = reinterpret_cast<Camera3CbOpsRedirect*>(CAMX_CALLOC(sizeof(Camera3CbOpsRedirect)));

if (NULL != pCamera3CbOps)
{
pNode->pData = pCamera3CbOps;
g_HAL3Entry.m_cbOpsList.InsertToTail(pNode);
}
}
}

// List management may have failed, skip override on failure
if (NULL != pCamera3CbOps)
{
pCamera3CbOps->cbOps.process_capture_result = process_capture_result;
pCamera3CbOps->cbOps.notify = notify;
pCamera3CbOps->pCamera3Device = pCamera3DeviceAPI;
pCamera3CbOps->pCbOpsAPI = pCamera3CbOpsAPI;
pCamera3CbOpsAPI = &(pCamera3CbOps->cbOps);
}
g_HAL3Entry.m_pCbOpsLock->Unlock();

return pHAL3->initialize(pCamera3DeviceAPI, pCamera3CbOpsAPI);
}

[->vendor\qcom\proprietary\camx\src\core\hal\camxhal3.cpp]

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// initialize
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
static int initialize(
const struct camera3_device* pCamera3DeviceAPI,
const camera3_callback_ops_t* pCamera3CbOpsAPI)
{
CAMX_ENTRYEXIT_SCOPE(CamxLogGroupHAL, SCOPEEventHAL3Initialize);

CamxResult result = CamxResultSuccess;

CAMX_ASSERT(NULL != pCamera3DeviceAPI);
CAMX_ASSERT(NULL != pCamera3DeviceAPI->priv);

CAMX_LOG_INFO(CamxLogGroupHAL, "initialize(): %p, %p", pCamera3DeviceAPI, pCamera3CbOpsAPI);

if ((NULL != pCamera3DeviceAPI) &&
(NULL != pCamera3DeviceAPI->priv))
{
HALDevice* pHALDevice = GetHALDevice(pCamera3DeviceAPI);
pHALDevice->SetCallbackOps(pCamera3CbOpsAPI);

// initialize thermal after hal callback is set
const StaticSettings* pStaticSettings = HwEnvironment::GetInstance()->GetStaticSettings();

if (TRUE == pStaticSettings->enableThermalMitigation)
{
ThermalManager* pThermalManager = HAL3Module::GetInstance()->GetThermalManager();
if (NULL != pThermalManager)
{
CamxResult resultThermalReg = pThermalManager->RegisterHALDevice(pHALDevice);
if (CamxResultEResource == resultThermalReg)
{
result = resultThermalReg;
}
// else Ignore result even if it fails. We don't want camera to fail due to any issues with initializing the
// thermal engine
}
}
}
else
{
CAMX_LOG_ERROR(CamxLogGroupHAL, "Invalid argument(s) for initialize()");
// HAL interface requires -ENODEV (EFailed) if initialization fails for any reason, including invalid arguments.
result = CamxResultEFailed;
}

return Utils::CamxResultToErrno(result);
}

4.2.4 Summary

  • open

This is a standard camera_module_t method, used mainly to obtain the camera3_device_t device structure; CamX-CHI implements it. The open method mainly does the following:

  1. Pass the current camera id into CHI for the remap operation. The remap logic is driven entirely by the CHI user's needs; users can add their own remap logic in CHI as required.
  2. Instantiate the HALDevice object; its creation invokes the Initialize method, which fills in CamX's own Camera3Device structure.
  3. Point m_HALCallbacks.process_capture_result at the local method ProcessCaptureResult and m_HALCallbacks.notify_result at the local method Notify (m_HALCallbacks is later registered into CHI while the streams are configured; once CHI finishes processing, it returns data and events to CamX through these two callbacks).
  4. Finally, return the Camera3Device member of HALDevice to the CameraCaptureSession in the Provider.

Camera3Device is in fact a redefinition of camera3_device_t: HwDevice corresponds to camera3_device_t's hw_device_t, and Camera3DeviceOps corresponds to camera3_device_ops_t. During HALDevice initialization, g_camera3DeviceOps, the structure holding CamX's implementation of the HAL3 entry points, is assigned into Camera3DeviceOps.
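
A condensed view of that mapping, inferred from the HALDevice::Initialize code shown earlier (member names simplified; the real definition lives in the CamX HAL3 type headers):

// Sketch: CamX's Camera3Device mirrors camera3_device_t (simplified).
struct Camera3Device
{
    HwDevice          hwDevice;     // corresponds to hw_device_t
    Camera3DeviceOps* pDeviceOps;   // corresponds to camera3_device_ops_t
    VOID*             pPrivateData; // back-pointer to the owning HALDevice
};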

  • initialize

This method is called immediately after open; it is mainly used to pass the upper layer's callback interfaces into the HAL. Whenever data or an event is produced, CamX reports it to the caller through these callbacks. Its implementation is fairly simple.

initialize takes two parameters: the camera3_device_t structure previously obtained through open, and the CameraDevice that implements camera3_callback_ops_t. Clearly the camera3_device_t is not the interesting part; the method's main job is to associate camera3_callback_ops_t with CamX, so that once data is ready it is returned through the callbacks in camera3_callback_ops_t to the CameraDevice in the Camera Provider. The basic flow can be summarized as follows:

  1. Instantiate a Camera3CbOpsRedirect object and add it to the g_HAL3Entry.m_cbOpsList queue, so that the object can be conveniently retrieved later when needed.
  2. Assign the addresses of the local process_capture_result and notify methods to the process_capture_result and notify function pointers in Camera3CbOpsRedirect.cbOps.
  3. Save the upper layer's callback structure pointer pCamera3CbOpsAPI into Camera3CbOpsRedirect.pCbOpsAPI, point pCamera3CbOpsAPI at Camera3CbOpsRedirect.cbOps, and pass pCamera3CbOpsAPI through JumpTableHAL3's initialize method into HALDevice's m_pCamera3CbOps member, so that HALDevice's m_pCamera3CbOps ends up pointing at CamX's local process_capture_result and notify methods.

After this shuffle, whenever CHI delivers data it first enters the local method ProcessCaptureResult, which fetches HALDevice's member m_pCamera3CbOps and calls its process_capture_result, i.e. the process_capture_result defined in camxhal3entry.cpp. That method calls JumpTableHAL3.process_capture_result, which in the end calls process_capture_result in Camera3CbOpsRedirect.pCbOpsAPI; this reaches the callback passed in from the Provider, and the data is duly handed to the CameraCaptureSession. (The sketch below condenses this chain.)
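
The resulting result path can be condensed into the following call chain (a schematic in comment form; the list bookkeeping from the initialize() code above is omitted):

// Schematic of the capture-result path once initialize() has run:
//
//   CHI result
//     -> HALDevice::ProcessCaptureResult              (registered in m_HALCallbacks)
//        -> m_pCamera3CbOps->process_capture_result
//             == process_capture_result               (camxhal3entry.cpp)
//                -> JumpTableHAL3.process_capture_result
//                   -> Camera3CbOpsRedirect.pCbOpsAPI->process_capture_result
//                        == the framework callback from the Camera Provider
//                           (delivers the result to the CameraDeviceSession)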

4.3 Configuring the Camera Device Data Streams

During camera startup, after the app has acquired and opened the camera device, it calls CameraDevice.createCaptureSession to obtain a CameraDeviceSession and, through the standard Camera API2 interfaces, notifies the Camera Service, which calls its CameraDeviceClient.endConfigure method. Inside that method the Provider is notified through the HIDL interface ICameraDeviceSession::configureStreams_3_4 to handle this configuration request. Within the Provider, the configure_streams method of the camera3_device_t structure obtained during the open flow passes the data-stream configuration into CamX-CHI, which then carries out the stream configuration. Next, let's analyze in detail CamX-CHI's implementation of the standard HAL3 interface configure_streams.

4.3.1 configure_streams

4.3.1.1 camxhal3entry->configure_streams

All the ops dispatch through the jump table to the JumpTableHAL3 implementations in camxhal3.cpp.

[->vendor\qcom\proprietary\camx\src\core\hal\camxhal3entry.cpp]

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// configure_streams
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
int configure_streams(
    const struct camera3_device*    pCamera3DeviceAPI,
    camera3_stream_configuration_t* pStreamConfigsAPI)
{
    JumpTableHAL3* pHAL3 = static_cast<JumpTableHAL3*>(g_dispatchHAL3.GetJumpTable());

    CAMX_ASSERT(pHAL3);
    CAMX_ASSERT(pHAL3->configure_streams);

    return pHAL3->configure_streams(pCamera3DeviceAPI, pStreamConfigsAPI);
}
4.3.1.2 camxhal3->configure_streams

[->vendor\qcom\proprietary\camx\src\core\hal\camxhal3.cpp]

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// configure_streams
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
static int configure_streams(
const struct camera3_device* pCamera3DeviceAPI,
camera3_stream_configuration_t* pStreamConfigsAPI)
{
CAMX_ENTRYEXIT_SCOPE(CamxLogGroupHAL, SCOPEEventHAL3ConfigureStreams);

CamxResult result = CamxResultSuccess;

CAMX_ASSERT(NULL != pCamera3DeviceAPI);
CAMX_ASSERT(NULL != pCamera3DeviceAPI->priv);
CAMX_ASSERT(NULL != pStreamConfigsAPI);
CAMX_ASSERT(pStreamConfigsAPI->num_streams > 0);
CAMX_ASSERT(NULL != pStreamConfigsAPI->streams);

if ((NULL != pCamera3DeviceAPI) &&
(NULL != pCamera3DeviceAPI->priv) &&
(NULL != pStreamConfigsAPI) &&
(pStreamConfigsAPI->num_streams > 0) &&
(NULL != pStreamConfigsAPI->streams))
{
CAMX_LOG_INFO(CamxLogGroupHAL, "Number of streams: %d", pStreamConfigsAPI->num_streams);

HALDevice* pHALDevice = GetHALDevice(pCamera3DeviceAPI);

CAMX_LOG_CONFIG(CamxLogGroupHAL, "HalOp: Begin CONFIG: %p, logicalCameraId: %d, cameraId: %d",
pCamera3DeviceAPI, pHALDevice->GetCameraId(), pHALDevice->GetFwCameraId());

uint32_t numStreams = pStreamConfigsAPI->num_streams;
UINT32 logicalCameraId = pHALDevice->GetCameraId();
UINT32 cameraId = pHALDevice->GetFwCameraId();
BINARY_LOG(LogEvent::HAL3_ConfigSetup, numStreams, logicalCameraId, cameraId);
for (UINT32 stream = 0; stream < pStreamConfigsAPI->num_streams; stream++)
{
CAMX_ASSERT(NULL != pStreamConfigsAPI->streams[stream]);

if (NULL == pStreamConfigsAPI->streams[stream])
{
CAMX_LOG_ERROR(CamxLogGroupHAL, "Invalid argument 2 for configure_streams()");
// HAL interface requires -EINVAL (EInvalidArg) for invalid arguments
result = CamxResultEInvalidArg;
break;
}
else
{
camera3_stream_t& rConfigStream = *pStreamConfigsAPI->streams[stream];
BINARY_LOG(LogEvent::HAL3_StreamInfo, rConfigStream);

CAMX_LOG_INFO(CamxLogGroupHAL, " stream[%d] = %p - info:", stream,
pStreamConfigsAPI->streams[stream]);
CAMX_LOG_INFO(CamxLogGroupHAL, " format : %d, %s",
pStreamConfigsAPI->streams[stream]->format,
FormatToString(pStreamConfigsAPI->streams[stream]->format));
CAMX_LOG_INFO(CamxLogGroupHAL, " width : %d",
pStreamConfigsAPI->streams[stream]->width);
CAMX_LOG_INFO(CamxLogGroupHAL, " height : %d",
pStreamConfigsAPI->streams[stream]->height);
CAMX_LOG_INFO(CamxLogGroupHAL, " stream_type : %08x, %s",
pStreamConfigsAPI->streams[stream]->stream_type,
StreamTypeToString(pStreamConfigsAPI->streams[stream]->stream_type));
CAMX_LOG_INFO(CamxLogGroupHAL, " usage : %08x",
pStreamConfigsAPI->streams[stream]->usage);
CAMX_LOG_INFO(CamxLogGroupHAL, " max_buffers : %d",
pStreamConfigsAPI->streams[stream]->max_buffers);
CAMX_LOG_INFO(CamxLogGroupHAL, " rotation : %08x, %s",
pStreamConfigsAPI->streams[stream]->rotation,
RotationToString(pStreamConfigsAPI->streams[stream]->rotation));
CAMX_LOG_INFO(CamxLogGroupHAL, " data_space : %08x, %s",
pStreamConfigsAPI->streams[stream]->data_space,
DataSpaceToString(pStreamConfigsAPI->streams[stream]->data_space));
CAMX_LOG_INFO(CamxLogGroupHAL, " priv : %p",
pStreamConfigsAPI->streams[stream]->priv);
#if (defined(CAMX_ANDROID_API) && (CAMX_ANDROID_API >= 28)) // Android-P or better
CAMX_LOG_INFO(CamxLogGroupHAL, " physical_camera_id : %s",
pStreamConfigsAPI->streams[stream]->physical_camera_id);
#endif // Android-P or better
pStreamConfigsAPI->streams[stream]->reserved[0] = NULL;
pStreamConfigsAPI->streams[stream]->reserved[1] = NULL;
}
}
CAMX_LOG_INFO(CamxLogGroupHAL, " operation_mode: %d", pStreamConfigsAPI->operation_mode);


Camera3StreamConfig* pStreamConfigs = reinterpret_cast<Camera3StreamConfig*>(pStreamConfigsAPI);

result = pHALDevice->ConfigureStreams(pStreamConfigs);

if ((CamxResultSuccess != result) && (CamxResultEInvalidArg != result))
{
// HAL interface requires -ENODEV (EFailed) if a fatal error occurs
result = CamxResultEFailed;
}

if (CamxResultSuccess == result)
{
for (UINT32 stream = 0; stream < pStreamConfigsAPI->num_streams; stream++)
{
CAMX_ASSERT(NULL != pStreamConfigsAPI->streams[stream]);

if (NULL == pStreamConfigsAPI->streams[stream])
{
CAMX_LOG_ERROR(CamxLogGroupHAL, "Invalid argument 2 for configure_streams()");
// HAL interface requires -EINVAL (EInvalidArg) for invalid arguments
result = CamxResultEInvalidArg;
break;
}
else
{
CAMX_LOG_CONFIG(CamxLogGroupHAL, " FINAL stream[%d] = %p - info:", stream,
pStreamConfigsAPI->streams[stream]);
CAMX_LOG_CONFIG(CamxLogGroupHAL, " format : %d, %s",
pStreamConfigsAPI->streams[stream]->format,
FormatToString(pStreamConfigsAPI->streams[stream]->format));
CAMX_LOG_CONFIG(CamxLogGroupHAL, " width : %d",
pStreamConfigsAPI->streams[stream]->width);
CAMX_LOG_CONFIG(CamxLogGroupHAL, " height : %d",
pStreamConfigsAPI->streams[stream]->height);
CAMX_LOG_CONFIG(CamxLogGroupHAL, " stream_type : %08x, %s",
pStreamConfigsAPI->streams[stream]->stream_type,
StreamTypeToString(pStreamConfigsAPI->streams[stream]->stream_type));
CAMX_LOG_CONFIG(CamxLogGroupHAL, " usage : %08x",
pStreamConfigsAPI->streams[stream]->usage);
CAMX_LOG_CONFIG(CamxLogGroupHAL, " max_buffers : %d",
pStreamConfigsAPI->streams[stream]->max_buffers);
CAMX_LOG_CONFIG(CamxLogGroupHAL, " rotation : %08x, %s",
pStreamConfigsAPI->streams[stream]->rotation,
RotationToString(pStreamConfigsAPI->streams[stream]->rotation));
CAMX_LOG_CONFIG(CamxLogGroupHAL, " data_space : %08x, %s",
pStreamConfigsAPI->streams[stream]->data_space,
DataSpaceToString(pStreamConfigsAPI->streams[stream]->data_space));
CAMX_LOG_CONFIG(CamxLogGroupHAL, " priv : %p",
pStreamConfigsAPI->streams[stream]->priv);
CAMX_LOG_CONFIG(CamxLogGroupHAL, " reserved[0] : %p",
pStreamConfigsAPI->streams[stream]->reserved[0]);
CAMX_LOG_CONFIG(CamxLogGroupHAL, " reserved[1] : %p",
pStreamConfigsAPI->streams[stream]->reserved[1]);

Camera3HalStream* pHalStream =
reinterpret_cast<Camera3HalStream*>(pStreamConfigsAPI->streams[stream]->reserved[0]);
if (pHalStream != NULL)
{
if (TRUE == HwEnvironment::GetInstance()->GetStaticSettings()->enableHALFormatOverride)
{
pStreamConfigsAPI->streams[stream]->format =
static_cast<HALPixelFormat>(pHalStream->overrideFormat);
}
CAMX_LOG_CONFIG(CamxLogGroupHAL,
" pHalStream: %p format : 0x%x, overrideFormat : 0x%x consumer usage: %llx,"
" producer usage: %llx",
pHalStream, pStreamConfigsAPI->streams[stream]->format,
pHalStream->overrideFormat, pHalStream->consumerUsage, pHalStream->producerUsage);
}
}
}
}
CAMX_LOG_CONFIG(CamxLogGroupHAL, "HalOp: End CONFIG, logicalCameraId: %d, cameraId: %d",
pHALDevice->GetCameraId(), pHALDevice->GetFwCameraId());
}
else
{
CAMX_LOG_ERROR(CamxLogGroupHAL, "Invalid argument(s) for configure_streams()");
// HAL interface requires -EINVAL (EInvalidArg) for invalid arguments
result = CamxResultEInvalidArg;
}

return Utils::CamxResultToErrno(result);
}
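Note the final line: CamX uses CamxResult codes internally and converts to the errno convention required by the HAL3 contract only at the boundary, via Utils::CamxResultToErrno. Judging from the comments in the code above, the conversion behaves roughly like this sketch (illustrative only, not the actual implementation):

#include <errno.h>

// Hypothetical sketch of the result-to-errno mapping at the HAL boundary.
static int CamxResultToErrnoSketch(CamxResult result)
{
    switch (result)
    {
        case CamxResultSuccess:     return 0;
        case CamxResultEInvalidArg: return -EINVAL; // invalid stream configuration
        default:                    return -ENODEV; // fatal error per HAL3 contract
    }
}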
4.3.1.3 HALDevice::ConfigureStreams

[->vendor\qcom\proprietary\camx\src\core\hal\camxhaldevice.cpp]

CamxResult HALDevice::ConfigureStreams(
Camera3StreamConfig* pStreamConfigs)
{
CamxResult result = CamxResultSuccess;

// Validate the incoming stream configurations
result = CheckValidStreamConfig(pStreamConfigs);

if ((StreamConfigModeConstrainedHighSpeed == pStreamConfigs->operationMode) ||
(StreamConfigModeSuperSlowMotionFRC == pStreamConfigs->operationMode))
{
SearchNumBatchedFrames (pStreamConfigs, &m_usecaseNumBatchedFrames, &m_FPSValue);
CAMX_ASSERT(m_usecaseNumBatchedFrames > 1);
}
else
{
// Not a HFR usecase batch frames value need to set to 1.
m_usecaseNumBatchedFrames = 1;
}

if (CamxResultSuccess == result)
{
if (TRUE == m_bCHIModuleInitialized)
{
GetCHIAppCallbacks()->chi_teardown_override_session(reinterpret_cast<camera3_device*>(&m_camera3Device), 0, NULL);
ReleaseStreamConfig();
DeInitRequestLogger();
}

m_bCHIModuleInitialized = CHIModuleInitialize(pStreamConfigs);

ClearFrameworkRequestBuffer();

if (FALSE == m_bCHIModuleInitialized)
{
CAMX_LOG_ERROR(CamxLogGroupHAL, "CHI Module failed to configure streams");
result = CamxResultEFailed;
}
else
{
result = SaveStreamConfig(pStreamConfigs);
result = InitializeRequestLogger();
CAMX_LOG_VERBOSE(CamxLogGroupHAL, "CHI Module configured streams ... CHI is in control!");

if (CamxResultSuccess == result)
{
if (m_tunnellingEnabled)
{
// Create tunnelling layer
// TODO: App should provide details as z order, dest. rect., etc.
LayerInfo layer;

for (UINT i = 0; i < pStreamConfigs->numStreams; i++)
{
if (StreamTypeOutput == pStreamConfigs->ppStreams[i]->streamType &&
HALPixelFormatImplDefined == pStreamConfigs->ppStreams[i]->format &&
((GrallocUsageHwComposer & pStreamConfigs->ppStreams[i]->grallocUsage) ||
(GrallocUsageHwTexture & pStreamConfigs->ppStreams[i]->grallocUsage)))
{
Camera3Stream* stream = pStreamConfigs->ppStreams[i];
layer.width = stream->width;
layer.height = stream->height;
layer.dataspace = (int32_t)stream->dataspace;
layer.rotation = (int32_t)stream->rotation;
//layer.format is not used for now

result = DisplayConfigInterface::GetInstance()->CreateLayer(layer);
if (CamxResultSuccess != result)
{
CAMX_LOG_ERROR(CamxLogGroupHAL, "Create layer failed with %d", result);
return result;
}
break;
}
}
}
}
}
}

return result;
}

If streams were configured before, m_bCHIModuleInitialized is already TRUE, so the existing override session is torn down first (chi_teardown_override_session) and the saved stream configuration and request logger are released; ConfigureStreams then calls CHIModuleInitialize to hand the new stream configuration over to CHI.

4.3.1.4 HALDevice::CHIModuleInitialize
BOOL HALDevice::CHIModuleInitialize(
Camera3StreamConfig* pStreamConfigs)
{
BOOL isOverrideEnabled = FALSE;

if (TRUE == HAL3Module::GetInstance()->IsCHIOverrideModulePresent())
{
/// @todo (CAMX-1518) Handle private data from Override module
VOID* pPrivateData;
chi_hal_callback_ops_t* pCHIAppCallbacks = GetCHIAppCallbacks();

pCHIAppCallbacks->chi_initialize_override_session(GetCameraId(),
reinterpret_cast<const camera3_device_t*>(&m_camera3Device),
&m_HALCallbacks,
reinterpret_cast<camera3_stream_configuration_t*>(pStreamConfigs),
&isOverrideEnabled,
&pPrivateData);
}

return isOverrideEnabled;
}

[->vendor\qcom\proprietary\chi-cdk\core\chiframework\chxextensioninterface.cpp]

static CDKResult chi_initialize_override_session(
uint32_t cameraId,
const camera3_device_t* camera3_device,
const chi_hal_ops_t* chiHalOps,
camera3_stream_configuration_t* stream_config,
int* override_config,
void** priv)
{
ExtensionModule* pExtensionModule = ExtensionModule::GetInstance();

pExtensionModule->InitializeOverrideSession(cameraId, camera3_device, chiHalOps, stream_config, override_config, priv);

return CDKResultSuccess;
}
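How does CamX obtain these chi_* callbacks in the first place? During HAL initialization it dlopens the CHI override library and calls its entry symbol, which fills a chi_hal_callback_ops_t with function pointers such as chi_initialize_override_session. The library name and entry symbol below are the ones commonly seen on Qualcomm platforms; treat them as assumptions in this sketch:

#include <dlfcn.h>

// Sketch of the override-module discovery step (assumed library/symbol names).
typedef void (*ChiHalOverrideEntry)(chi_hal_callback_ops_t* pCallbacks);

static bool LoadChiOverride(chi_hal_callback_ops_t* pCallbacks)
{
    void* handle = dlopen("com.qti.chi.override.so", RTLD_NOW);
    if (NULL == handle)
    {
        return false; // no CHI override module present
    }

    ChiHalOverrideEntry entry = reinterpret_cast<ChiHalOverrideEntry>(
        dlsym(handle, "chi_hal_override_entry"));
    if (NULL == entry)
    {
        return false;
    }

    entry(pCallbacks); // CHI fills in chi_initialize_override_session etc.
    return true;
}

IsCHIOverrideModulePresent() in the code above reports whether this load succeeded, and GetCHIAppCallbacks() returns the filled table.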
4.3.1.5 ExtensionModule::InitializeOverrideSession

[->vendor\qcom\proprietary\chi-cdk\core\chiframework\chxextensionmodule.cpp]

CDKResult ExtensionModule::InitializeOverrideSession(
uint32_t logicalCameraId,
const camera3_device_t* pCamera3Device,
const chi_hal_ops_t* chiHalOps,
camera3_stream_configuration_t* pStreamConfig,
int* pIsOverrideEnabled,
VOID** pPrivate)
{
CDKResult result = CDKResultSuccess;
UINT32 modeCount = 0;
ChiSensorModeInfo* pAllModes = NULL;
UINT32 fps = *m_pDefaultMaxFPS;
BOOL isVideoMode = FALSE;
uint32_t operation_mode;
static BOOL fovcModeCheck = EnableFOVCUseCase();
UsecaseId selectedUsecaseId = UsecaseId::NoMatch;
UINT minSessionFps = 0;
UINT maxSessionFps = 0;
CDKResult tagOpResult = CDKResultEFailed;
ChiBLMParams blmParams;

*pPrivate = NULL;
*pIsOverrideEnabled = FALSE;
m_aFlushInProgress[logicalCameraId] = FALSE;
m_firstResult = FALSE;
m_hasFlushOccurred[logicalCameraId] = FALSE;
blmParams.height = 0;
blmParams.width = 0;

if (NULL == m_hCHIContext)
{
m_hCHIContext = g_chiContextOps.pOpenContext();
}

ChiVendorTagsOps vendorTagOps = { 0 };
g_chiContextOps.pTagOps(&vendorTagOps);
operation_mode = pStreamConfig->operation_mode >> 16;
operation_mode = operation_mode & 0x000F;
pStreamConfig->operation_mode = pStreamConfig->operation_mode & 0xFFFF;

UINT numOutputStreams = 0;
for (UINT32 stream = 0; stream < pStreamConfig->num_streams; stream++)
{
if (0 != (pStreamConfig->streams[stream]->usage & GrallocUsageHwVideoEncoder))
{
isVideoMode = TRUE;

if((pStreamConfig->streams[stream]->height * pStreamConfig->streams[stream]->width) >
(blmParams.height * blmParams.width))
{
blmParams.height = pStreamConfig->streams[stream]->height;
blmParams.width = pStreamConfig->streams[stream]->width;
}
}

if (CAMERA3_STREAM_OUTPUT == pStreamConfig->streams[stream]->stream_type)
{
numOutputStreams++;
}

//If video stream not present in that case store Preview/Snapshot Stream info
if((pStreamConfig->streams[stream]->height > blmParams.height) &&
(pStreamConfig->streams[stream]->width > blmParams.width) &&
(isVideoMode == FALSE))
{
blmParams.height = pStreamConfig->streams[stream]->height;
blmParams.width = pStreamConfig->streams[stream]->width;
}
}

if (numOutputStreams > MaxExternalBuffers)
{
CHX_LOG_ERROR("numOutputStreams(%u) greater than MaxExternalBuffers(%u)", numOutputStreams, MaxExternalBuffers);
result = CDKResultENotImplemented;
}

if ((isVideoMode == TRUE) && (operation_mode != 0))
{
UINT32 numSensorModes = m_logicalCameraInfo[logicalCameraId].m_cameraCaps.numSensorModes;
CHISENSORMODEINFO* pAllSensorModes = m_logicalCameraInfo[logicalCameraId].pSensorModeInfo;

if ((operation_mode - 1) >= numSensorModes)
{
result = CDKResultEOverflow;
CHX_LOG_ERROR("operation_mode: %d, numSensorModes: %d", operation_mode, numSensorModes);
}
else
{
fps = pAllSensorModes[operation_mode - 1].frameRate;
}
}

if (CDKResultSuccess == result)
{
#if defined(CAMX_ANDROID_API) && (CAMX_ANDROID_API >= 28) //Android-P or better
camera_metadata_t* metadata = const_cast<camera_metadata_t*>(pStreamConfig->session_parameters);

camera_metadata_entry_t entry = { 0 };

// The client may choose to send NULL sesssion parameter, which is fine. For example, torch mode
// will have NULL session param.
if (metadata != NULL)
{
entry.tag = ANDROID_CONTROL_AE_TARGET_FPS_RANGE;

int ret = find_camera_metadata_entry(metadata, entry.tag, &entry);

if(ret == 0) {
minSessionFps = entry.data.i32[0];
maxSessionFps = entry.data.i32[1];
m_usecaseMaxFPS = maxSessionFps;
}
}

CHITAGSOPS tagOps = { 0 };
UINT32 tagLocation = 0;

g_chiContextOps.pTagOps(&tagOps);

tagOpResult = tagOps.pQueryVendorTagLocation(
"org.codeaurora.qcamera3.sessionParameters",
"availableStreamMap",
&tagLocation);

if (CDKResultSuccess == tagOpResult)
{
camera_metadata_entry_t entry = { 0 };

if (metadata != NULL)
{
int ret = find_camera_metadata_entry(metadata, tagLocation, &entry);
}
}

tagOpResult = tagOps.pQueryVendorTagLocation(
"org.codeaurora.qcamera3.sessionParameters",
"overrideResourceCostValidation",
&tagLocation);

if ((NULL != metadata) && (CDKResultSuccess == tagOpResult))
{
camera_metadata_entry_t resourcecostEntry = { 0 };

if (0 == find_camera_metadata_entry(metadata, tagLocation, &resourcecostEntry))
{
BOOL bypassRCV = static_cast<BOOL>(resourcecostEntry.data.u8[0]);

if (TRUE == bypassRCV)
{
m_pResourcesUsedLock->Lock();
m_logicalCameraRCVBypassSet.insert(logicalCameraId);
m_pResourcesUsedLock->Unlock();
}
}
}

#endif

CHIHANDLE staticMetaDataHandle = const_cast<camera_metadata_t*>(
m_logicalCameraInfo[logicalCameraId].m_cameraInfo.static_camera_characteristics);
UINT32 metaTagPreviewFPS = 0;
UINT32 metaTagVideoFPS = 0;

m_previewFPS = 0;
m_videoFPS = 0;
GetInstance()->GetVendorTagOps(&vendorTagOps);

result = vendorTagOps.pQueryVendorTagLocation("org.quic.camera2.streamBasedFPS.info", "PreviewFPS",
&metaTagPreviewFPS);
if (CDKResultSuccess == result)
{
vendorTagOps.pGetMetaData(staticMetaDataHandle, metaTagPreviewFPS, &m_previewFPS,
sizeof(m_previewFPS));
}

result = vendorTagOps.pQueryVendorTagLocation("org.quic.camera2.streamBasedFPS.info", "VideoFPS", &metaTagVideoFPS);
if (CDKResultSuccess == result)
{
vendorTagOps.pGetMetaData(staticMetaDataHandle, metaTagVideoFPS, &m_videoFPS,
sizeof(m_videoFPS));
}

if ((StreamConfigModeConstrainedHighSpeed == pStreamConfig->operation_mode) ||
(StreamConfigModeSuperSlowMotionFRC == pStreamConfig->operation_mode))
{
if ((StreamConfigModeConstrainedHighSpeed == pStreamConfig->operation_mode) &&
(30 >= maxSessionFps))
{
minSessionFps = DefaultFrameRateforHighSpeedSession;
maxSessionFps = DefaultFrameRateforHighSpeedSession;
m_usecaseMaxFPS = maxSessionFps;

CHX_LOG_INFO("minSessionFps = %d maxSessionFps = %d", minSessionFps, maxSessionFps);
}

SearchNumBatchedFrames(logicalCameraId, pStreamConfig,
&m_usecaseNumBatchedFrames, &m_HALOutputBufferCombined,
&m_usecaseMaxFPS, maxSessionFps);
if (480 > m_usecaseMaxFPS)
{
m_CurrentpowerHint = PERF_LOCK_POWER_HINT_VIDEO_ENCODE_HFR;
}
else
{
// For 480FPS or higher, require more aggresive power hint
m_CurrentpowerHint = PERF_LOCK_POWER_HINT_VIDEO_ENCODE_HFR_480FPS;
}
}
else
{
// Not a HFR usecase, batch frames value need to be set to 1.
m_usecaseNumBatchedFrames = 1;
m_HALOutputBufferCombined = FALSE;
if (maxSessionFps == 0)
{
m_usecaseMaxFPS = fps;
}
if (TRUE == isVideoMode)
{
if (30 >= m_usecaseMaxFPS)
{
m_CurrentpowerHint = PERF_LOCK_POWER_HINT_VIDEO_ENCODE;
}
else
{
m_CurrentpowerHint = PERF_LOCK_POWER_HINT_VIDEO_ENCODE_60FPS;
}
}
else
{
m_CurrentpowerHint = PERF_LOCK_POWER_HINT_PREVIEW;
}
}

if ((NULL != m_pPerfLockManager[logicalCameraId]) && (m_CurrentpowerHint != m_previousPowerHint))
{
m_pPerfLockManager[logicalCameraId]->ReleasePerfLock(m_previousPowerHint);
}

// Example [B == batch]: (240 FPS / 4 FPB = 60 BPS) / 30 FPS (Stats frequency goal) = 2 BPF i.e. skip every other stats
*m_pStatsSkipPattern = m_usecaseMaxFPS / m_usecaseNumBatchedFrames / 30;
if (*m_pStatsSkipPattern < 1)
{
*m_pStatsSkipPattern = 1;
}

m_VideoHDRMode = (StreamConfigModeVideoHdr == pStreamConfig->operation_mode);

m_torchWidgetUsecase = (StreamConfigModeQTITorchWidget == pStreamConfig->operation_mode);

// this check is introduced to avoid set *m_pEnableFOVC == 1 if fovcEnable is disabled in
// overridesettings & fovc bit is set in operation mode.
// as well as to avoid set,when we switch Usecases.
if (TRUE == fovcModeCheck)
{
*m_pEnableFOVC = ((pStreamConfig->operation_mode & StreamConfigModeQTIFOVC) == StreamConfigModeQTIFOVC) ? 1 : 0;
}

SetHALOps(logicalCameraId, chiHalOps);

m_logicalCameraInfo[logicalCameraId].m_pCamera3Device = pCamera3Device;

selectedUsecaseId = m_pUsecaseSelector->GetMatchingUsecase(&m_logicalCameraInfo[logicalCameraId],
pStreamConfig);

CHX_LOG_CONFIG("Session_parameters FPS range %d:%d, previewFPS %d, videoFPS %d "
"BatchSize: %u HALOutputBufferCombined %d FPS: %u SkipPattern: %u, "
"cameraId = %d selected use case = %d",
minSessionFps,
maxSessionFps,
m_previewFPS,
m_videoFPS,
m_usecaseNumBatchedFrames,
m_HALOutputBufferCombined,
m_usecaseMaxFPS,
*m_pStatsSkipPattern,
logicalCameraId,
selectedUsecaseId);

// FastShutter mode supported only in ZSL usecase.
if ((pStreamConfig->operation_mode == StreamConfigModeFastShutter) &&
(UsecaseId::PreviewZSL != selectedUsecaseId))
{
pStreamConfig->operation_mode = StreamConfigModeNormal;
}
m_operationMode[logicalCameraId] = pStreamConfig->operation_mode;
}

if (m_pBLMClient != NULL)
{
blmParams.numcamera = m_logicalCameraInfo[logicalCameraId].numPhysicalCameras;
blmParams.logicalCameraType = m_logicalCameraInfo[logicalCameraId].logicalCameraType;
blmParams.FPS = m_usecaseMaxFPS;
blmParams.selectedusecaseId = selectedUsecaseId;
blmParams.socId = GetPlatformID();
blmParams.isVideoMode = isVideoMode;

m_pBLMClient->SetUsecaseBwLevel(blmParams);
}

if (UsecaseId::NoMatch != selectedUsecaseId)
{
m_pStreamConfig[logicalCameraId] = static_cast<camera3_stream_configuration_t*>(
CHX_CALLOC(sizeof(camera3_stream_configuration_t)));
m_pStreamConfig[logicalCameraId]->streams = static_cast<camera3_stream_t**>(
CHX_CALLOC(sizeof(camera3_stream_t*) * pStreamConfig->num_streams));
m_pStreamConfig[logicalCameraId]->num_streams = pStreamConfig->num_streams;

for (UINT32 i = 0; i< m_pStreamConfig[logicalCameraId]->num_streams; i++)
{
m_pStreamConfig[logicalCameraId]->streams[i] = pStreamConfig->streams[i];
}

m_pStreamConfig[logicalCameraId]->operation_mode = pStreamConfig->operation_mode;

if (NULL != pStreamConfig->session_parameters)
{
m_pStreamConfig[logicalCameraId]->session_parameters =
(const camera_metadata_t *)allocate_copy_camera_metadata_checked(
pStreamConfig->session_parameters,
get_camera_metadata_size(pStreamConfig->session_parameters));
}

m_pSelectedUsecase[logicalCameraId] =
m_pUsecaseFactory->CreateUsecaseObject(&m_logicalCameraInfo[logicalCameraId],
selectedUsecaseId, m_pStreamConfig[logicalCameraId]);

if (NULL != m_pSelectedUsecase[logicalCameraId])
{
// use camera device / used for recovery only for regular session
m_SelectedUsecaseId[logicalCameraId] = (UINT32)selectedUsecaseId;
CHX_LOG_CONFIG("Logical cam Id = %d usecase addr = %p", logicalCameraId, m_pSelectedUsecase[
logicalCameraId]);

m_pCameraDeviceInfo[logicalCameraId].m_pCamera3Device = pCamera3Device;

*pIsOverrideEnabled = TRUE;

m_TeardownInProgress[logicalCameraId] = FALSE;
m_RecoveryInProgress[logicalCameraId] = FALSE;
m_terminateRecoveryThread[logicalCameraId] = FALSE;

m_pPCRLock[logicalCameraId] = Mutex::Create();
m_pDestroyLock[logicalCameraId] = Mutex::Create();
m_pRecoveryLock[logicalCameraId] = Mutex::Create();
m_pTriggerRecoveryLock[logicalCameraId] = Mutex::Create();
m_pTriggerRecoveryCondition[logicalCameraId] = Condition::Create();
m_pRecoveryCondition[logicalCameraId] = Condition::Create();
m_recoveryThreadPrivateData[logicalCameraId] = { logicalCameraId, this };

// Create recovery thread and wait on being signaled
m_pRecoveryThread[logicalCameraId].pPrivateData = &m_recoveryThreadPrivateData[logicalCameraId];

result = ChxUtils::ThreadCreate(ExtensionModule::RecoveryThread,
&m_pRecoveryThread[logicalCameraId],
&m_pRecoveryThread[logicalCameraId].hThreadHandle);
if (CDKResultSuccess != result)
{
CHX_LOG_ERROR("Failed to create recovery thread for logical camera %d result %d", logicalCameraId, result);
}
}
else
{
CHX_LOG_ERROR("For cameraId = %d CreateUsecaseObject failed", logicalCameraId);
m_logicalCameraInfo[logicalCameraId].m_pCamera3Device = NULL;

// Free m_pStreamConfig
if (NULL != m_pStreamConfig[logicalCameraId])
{
if (NULL != m_pStreamConfig[logicalCameraId]->streams)
{
CHX_FREE(m_pStreamConfig[logicalCameraId]->streams);
m_pStreamConfig[logicalCameraId]->streams = NULL;
}
if (NULL != m_pStreamConfig[logicalCameraId]->session_parameters)
{
free_camera_metadata(const_cast<camera_metadata_t*>(m_pStreamConfig[logicalCameraId]->session_parameters));
m_pStreamConfig[logicalCameraId]->session_parameters = NULL;
}
CHX_FREE(m_pStreamConfig[logicalCameraId]);
m_pStreamConfig[logicalCameraId] = NULL;
}
}
}

if ((CDKResultSuccess != result) || (UsecaseId::Torch == selectedUsecaseId))
{
// reset resource count in failure case or Torch case
ResetResourceCost(m_logicalCameraInfo[logicalCameraId].cameraId);
}

CHX_LOG_INFO(" logicalCameraId = %d, m_totalResourceBudget = %d, activeResourseCost = %d, m_IFEResourceCost = %d",
logicalCameraId, m_totalResourceBudget, GetActiveResourceCost(), m_IFEResourceCost[logicalCameraId]);

return result;
}

In summary, InitializeOverrideSession determines whether this is a video session and derives the frame-rate related settings, then matches a usecase for the logical camera via GetMatchingUsecase and creates the corresponding usecase object from the returned UsecaseId.
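Two details in this function are easy to miss. First, the upper 16 bits of operation_mode carry a 1-based sensor-mode index for video sessions, while the lower 16 bits keep the standard HAL3 operation mode. Second, the 3A stats skip pattern is derived from the usecase FPS and the batch size. A small worked sketch with invented values:

// Decoding operation_mode as InitializeOverrideSession does (example values).
uint32_t rawOpMode   = 0x00020001;                 // hypothetical value from the framework
uint32_t sensorIndex = (rawOpMode >> 16) & 0x000F; // == 2 -> pAllSensorModes[2 - 1].frameRate
uint32_t halOpMode   = rawOpMode & 0xFFFF;         // == 1, constrained high-speed mode

// Stats skip pattern, per the comment in the code above:
// (240 FPS / 4 frames per batch) = 60 batches/s; 60 / 30 Hz stats goal = 2,
// i.e. run 3A stats on every other batch.
uint32_t statsSkip = 240 / 4 / 30;                 // == 2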

4.3.2 GetMatchingUsecase

[->vendor\qcom\proprietary\chi-cdk\core\chiusecase\chxusecaseutils.cpp]

UsecaseId UsecaseSelector::GetMatchingUsecase(
const LogicalCameraInfo* pCamInfo,
camera3_stream_configuration_t* pStreamConfig)
{
UsecaseId usecaseId = UsecaseId::Default;
UINT32 VRDCEnable = ExtensionModule::GetInstance()->GetDCVRMode();
if ((pStreamConfig->num_streams == 2) && IsQuadCFASensor(pCamInfo, NULL) &&
(LogicalCameraType_Default == pCamInfo->logicalCameraType))
{
// need to validate preview size <= binning size, otherwise return error

/// If snapshot size is less than sensor binning size, select defaut zsl usecase.
/// Only if snapshot size is larger than sensor binning size, select QuadCFA usecase.
/// Which means for snapshot in QuadCFA usecase,
/// - either do upscale from sensor binning size,
/// - or change sensor mode to full size quadra mode.
if (TRUE == QuadCFAMatchingUsecase(pCamInfo, pStreamConfig))
{
usecaseId = UsecaseId::QuadCFA;
CHX_LOG_CONFIG("Quad CFA usecase selected");
return usecaseId;
}
}

if (pStreamConfig->operation_mode == StreamConfigModeSuperSlowMotionFRC)
{
usecaseId = UsecaseId::SuperSlowMotionFRC;
CHX_LOG_CONFIG("SuperSlowMotionFRC usecase selected");
return usecaseId;
}

/// Reset the usecase flags
VideoEISV2Usecase = 0;
VideoEISV3Usecase = 0;
GPURotationUsecase = FALSE;
GPUDownscaleUsecase = FALSE;
CHX_LOG_CONFIG("numPhysicalCameras: %d, logicalCameraType: %d",
pCamInfo->numPhysicalCameras, pCamInfo->logicalCameraType);

CHX_LOG_CONFIG("AIDirectorEnable = %d pStreamConfig->num_streams %d", ExtensionModule::GetInstance()->EnableAIDirector(), pStreamConfig->num_streams);

if ((NULL != pCamInfo) && (pCamInfo->numPhysicalCameras > 1) && VRDCEnable)
{
CHX_LOG_CONFIG("MultiCameraVR usecase selected");
usecaseId = UsecaseId::MultiCameraVR;
}

else if ((NULL != pCamInfo) && (pCamInfo->numPhysicalCameras > 1) &&
(pStreamConfig->num_streams > 1 || pCamInfo->logicalCameraType == LogicalCameraType_SAT))
{
CHX_LOG_CONFIG("MultiCamera usecase selected");
usecaseId = UsecaseId::MultiCamera;
}
else
{
CHX_LOG_CONFIG("default usecase selected");
SnapshotStreamConfig snapshotStreamConfig;
CHISTREAM** ppChiStreams = reinterpret_cast<CHISTREAM**>(pStreamConfig->streams);
switch (pStreamConfig->num_streams)
{
case 2:
if (TRUE == IsRawJPEGStreamConfig(pStreamConfig))
{
CHX_LOG_CONFIG("Raw + JPEG usecase selected");
usecaseId = UsecaseId::RawJPEG;
break;
}

/// @todo Enable ZSL by setting overrideDisableZSL to FALSE
/// @todo Because lt6911uxc can not be compatible with ZSL at prsent,
/// this is workaround to disable ZSL for lt6911uxc.
if (FALSE == m_pExtModule->DisableZSL() && strcmp("lt6911uxc",pCamInfo->m_cameraCaps.sensorCaps.pSensorName))
{
if (TRUE == IsPreviewZSLStreamConfig(pStreamConfig))
{
usecaseId = UsecaseId::PreviewZSL;
CHX_LOG_CONFIG("ZSL usecase selected");
}
}

if(TRUE == m_pExtModule->UseGPURotationUsecase())
{
CHX_LOG_CONFIG("GPU Rotation usecase flag set");
GPURotationUsecase = TRUE;
}

if (TRUE == m_pExtModule->UseGPUDownscaleUsecase())
{
CHX_LOG_CONFIG("GPU Downscale usecase flag set");
GPUDownscaleUsecase = TRUE;
}

if (TRUE == m_pExtModule->EnableMFNRUsecase())
{
if (TRUE == MFNRMatchingUsecase(pStreamConfig))
{
usecaseId = UsecaseId::MFNR;
CHX_LOG_CONFIG("MFNR usecase selected");
}
}

if (TRUE == m_pExtModule->EnableHFRNo3AUsecas())
{
CHX_LOG_CONFIG("HFR without 3A usecase flag set");
HFRNo3AUsecase = TRUE;
}

break;

case 3:
VideoEISV2Usecase = m_pExtModule->EnableEISV2Usecase();
VideoEISV3Usecase = m_pExtModule->EnableEISV3Usecase();
if (FALSE == m_pExtModule->DisableZSL() && (TRUE == IsPreviewZSLStreamConfig(pStreamConfig)))
{
usecaseId = UsecaseId::PreviewZSL;
CHX_LOG_CONFIG("ZSL usecase selected");
}
else if(TRUE == IsRawJPEGStreamConfig(pStreamConfig))
{
CHX_LOG_CONFIG("Raw + JPEG usecase selected");
usecaseId = UsecaseId::RawJPEG;
}
else if((FALSE == IsVideoEISV2Enabled(pStreamConfig)) && (FALSE == IsVideoEISV3Enabled(pStreamConfig)) &&
(TRUE == IsVideoLiveShotConfig(pStreamConfig)) && (FALSE == m_pExtModule->DisableZSL()))
{
CHX_LOG_CONFIG("Video With Liveshot, ZSL usecase selected");
usecaseId = UsecaseId::VideoLiveShot;
}
// Because LT6911 don't support ZSL.
if (!strcmp("lt6911uxc",pCamInfo->m_cameraCaps.sensorCaps.pSensorName))
{
CHX_LOG_CONFIG("Specific case for LT6911");
usecaseId = UsecaseId::Default;
}

break;

case 4:
GetSnapshotStreamConfiguration(pStreamConfig->num_streams, ppChiStreams, snapshotStreamConfig);
if ((SnapshotStreamType::HEIC == snapshotStreamConfig.type) && (NULL != snapshotStreamConfig.pRawStream))
{
CHX_LOG_CONFIG("Raw + HEIC usecase selected");
usecaseId = UsecaseId::RawJPEG;
}
break;

default:
CHX_LOG_CONFIG("Default usecase selected");
break;

}
}

if (TRUE == ExtensionModule::GetInstance()->IsTorchWidgetUsecase())
{
CHX_LOG_CONFIG("Torch widget usecase selected");
usecaseId = UsecaseId::Torch;
}

CHX_LOG_INFO("usecase ID:%d",usecaseId);
return usecaseId;
}

The UsecaseId is selected from num_streams, operation_mode, and the number of physical cameras (numPhysicalCameras). For example, a common two-stream configuration (an IMPLEMENTATION_DEFINED preview stream plus a JPEG snapshot stream) falls into the num_streams == 2 branch and, if ZSL is not disabled in the override settings and IsPreviewZSLStreamConfig() matches, selects UsecaseId::PreviewZSL.

4.3.2.1 CreateUsecaseObject

[->vendor\qcom\proprietary\chi-cdk\core\chiusecase\chxusecaseutils.cpp]

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// UsecaseFactory::CreateUsecaseObject
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
Usecase* UsecaseFactory::CreateUsecaseObject(
LogicalCameraInfo* pLogicalCameraInfo, ///< camera info
UsecaseId usecaseId, ///< Usecase Id
camera3_stream_configuration_t* pStreamConfig) ///< Stream config
{
Usecase* pUsecase = NULL;
UINT camera0Id = pLogicalCameraInfo->ppDeviceInfo[0]->cameraId;

switch (usecaseId)
{
case UsecaseId::PreviewZSL:
case UsecaseId::VideoLiveShot:
pUsecase = AdvancedCameraUsecase::Create(pLogicalCameraInfo, pStreamConfig, usecaseId);
break;
case UsecaseId::MultiCamera:
{
#if defined(CAMX_ANDROID_API) && (CAMX_ANDROID_API >= 28) //Android-P or better

LogicalCameraType logicalCameraType = m_pExtModule->GetCameraType(pLogicalCameraInfo->cameraId);

if (LogicalCameraType_DualApp == logicalCameraType)
{
pUsecase = UsecaseDualCamera::Create(pLogicalCameraInfo, pStreamConfig);
}
else
#endif
{
pUsecase = UsecaseMultiCamera::Create(pLogicalCameraInfo, pStreamConfig);
}
break;
}
case UsecaseId::MultiCameraVR:
//pUsecase = UsecaseMultiVRCamera::Create(pLogicalCameraInfo, pStreamConfig);
break;
case UsecaseId::QuadCFA:
pUsecase = AdvancedCameraUsecase::Create(pLogicalCameraInfo, pStreamConfig, usecaseId);
break;
case UsecaseId::Torch:
pUsecase = UsecaseTorch::Create(pLogicalCameraInfo, pStreamConfig);
break;
#if (!defined(LE_CAMERA)) // SuperSlowMotion not supported in LE
case UsecaseId::SuperSlowMotionFRC:
pUsecase = UsecaseSuperSlowMotionFRC::Create(pLogicalCameraInfo, pStreamConfig);
break;
#endif
default:
pUsecase = AdvancedCameraUsecase::Create(pLogicalCameraInfo, pStreamConfig, usecaseId);
break;
}

return pUsecase;
}

Depending on the UsecaseId, a different usecase object is created: AdvancedCameraUsecase for PreviewZSL and VideoLiveShot (and also for QuadCFA, the 4-in-1/9-in-1 remosaic case, as well as the default), UsecaseDualCamera for dual-camera app sessions, UsecaseMultiCamera for multi-camera, UsecaseTorch for torch mode, and UsecaseSuperSlowMotionFRC for super slow motion.

4.3.2.2 AdvancedCameraUsecase::Create

[->vendor\qcom\proprietary\chi-cdk\core\chiusecase\chxadvancedcamerausecase.cpp]

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/// AdvancedCameraUsecase::Create
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
AdvancedCameraUsecase* AdvancedCameraUsecase::Create(
LogicalCameraInfo* pCameraInfo, ///< Camera info
camera3_stream_configuration_t* pStreamConfig, ///< Stream configuration
UsecaseId usecaseId) ///< Identifier for usecase function
{
CDKResult result = CDKResultSuccess;
AdvancedCameraUsecase* pAdvancedCameraUsecase = CHX_NEW AdvancedCameraUsecase;

if ((NULL != pAdvancedCameraUsecase) && (NULL != pStreamConfig))
{
result = pAdvancedCameraUsecase->Initialize(pCameraInfo, pStreamConfig, usecaseId);

if (CDKResultSuccess != result)
{
pAdvancedCameraUsecase->Destroy(FALSE);
pAdvancedCameraUsecase = NULL;
}
}
else
{
result = CDKResultEFailed;
}

return pAdvancedCameraUsecase;
}
4.3.2.3 AdvancedCameraUsecase::Initialize
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/// AdvancedCameraUsecase::Initialize
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
CDKResult AdvancedCameraUsecase::Initialize(
LogicalCameraInfo* pCameraInfo, ///< Camera info
camera3_stream_configuration_t* pStreamConfig, ///< Stream configuration
UsecaseId usecaseId) ///< Identifier for the usecase function
{
ATRACE_BEGIN("AdvancedCameraUsecase::Initialize");
CDKResult result = CDKResultSuccess;

m_usecaseId = usecaseId;
m_cameraId = pCameraInfo->cameraId;
m_pLogicalCameraInfo = pCameraInfo;

m_pResultMutex = Mutex::Create();
m_pSetFeatureMutex = Mutex::Create();
m_pRealtimeReconfigDoneMutex = Mutex::Create();
m_isReprocessUsecase = FALSE;
m_numOfPhysicalDevices = pCameraInfo->numPhysicalCameras;
m_isUsecaseCloned = FALSE;
m_numPCRsBeforeStreamOn = ExtensionModule::GetInstance()->GetNumPCRsBeforeStreamOn(m_cameraId);

for (UINT32 i = 0 ; i < m_numOfPhysicalDevices; i++)
{
m_cameraIdMap[i] = pCameraInfo->ppDeviceInfo[i]->cameraId;
}

ExtensionModule::GetInstance()->GetVendorTagOps(&m_vendorTagOps);
CHX_LOG("pGetMetaData:%p, pSetMetaData:%p", m_vendorTagOps.pGetMetaData, m_vendorTagOps.pSetMetaData);
// Get the usecase configuration information from the XML file
pAdvancedUsecase = GetXMLUsecaseByName(ZSL_USECASE_NAME);

if (NULL == pAdvancedUsecase)
{
CHX_LOG_ERROR("Fail to get ZSL usecase from XML!");
result = CDKResultEFailed;
}

ChxUtils::Memset(m_enabledFeatures, 0, sizeof(m_enabledFeatures));
ChxUtils::Memset(m_rejectedSnapshotRequestList, 0, sizeof(m_rejectedSnapshotRequestList));

if (TRUE == IsMultiCameraUsecase())
{
m_isRdiStreamImported = TRUE;
m_isFdStreamImported = TRUE;
}
else
{
m_isRdiStreamImported = FALSE;
m_isFdStreamImported = FALSE;
m_inputOutputType = static_cast<UINT32>(InputOutputType::NO_SPECIAL);
}

for (UINT32 i = 0; i < m_numOfPhysicalDevices; i++)
{
if (FALSE == m_isRdiStreamImported)
{
m_pRdiStream[i] = static_cast<CHISTREAM*>(CHX_CALLOC(sizeof(CHISTREAM)));
}

if (FALSE == m_isFdStreamImported)
{
m_pFdStream[i] = static_cast<CHISTREAM*>(CHX_CALLOC(sizeof(CHISTREAM)));
}

m_pBayer2YuvStream[i] = static_cast<CHISTREAM*>(CHX_CALLOC(sizeof(CHISTREAM)));
m_pJPEGInputStream[i] = static_cast<CHISTREAM*>(CHX_CALLOC(sizeof(CHISTREAM)));
}

for (UINT32 i = 0; i < MaxPipelines; i++)
{
m_pipelineToSession[i] = InvalidSessionId;
}

m_realtimeSessionId = static_cast<UINT32>(InvalidSessionId);

if (NULL == pStreamConfig)
{
CHX_LOG_ERROR("pStreamConfig is NULL");
result = CDKResultEFailed;
}

if (CDKResultSuccess == result)
{
CHX_LOG_INFO("AdvancedCameraUsecase::Initialize usecaseId:%d num_streams:%d", m_usecaseId, pStreamConfig->num_streams);
CHX_LOG_INFO("CHI Input Stream Configs:");
for (UINT stream = 0; stream < pStreamConfig->num_streams; stream++)
{
CHX_LOG_INFO("\tstream = %p streamType = %d streamFormat = %d streamWidth = %d streamHeight = %d",
pStreamConfig->streams[stream],
pStreamConfig->streams[stream]->stream_type,
pStreamConfig->streams[stream]->format,
pStreamConfig->streams[stream]->width,
pStreamConfig->streams[stream]->height);

if (CAMERA3_STREAM_INPUT == pStreamConfig->streams[stream]->stream_type)
{
CHX_LOG_INFO("Reprocess usecase");
m_isReprocessUsecase = TRUE;
}
}
result = CreateMetadataManager(m_cameraId, false, NULL, true);
}

// Default sensor mode pick hint
m_defaultSensorModePickHint.sensorModeCaps.value = 0;
m_defaultSensorModePickHint.postSensorUpscale = FALSE;
m_defaultSensorModePickHint.sensorModeCaps.u.Normal = TRUE;

if (TRUE == IsQuadCFAUsecase() && (CDKResultSuccess == result))
{
CHIDIMENSION binningSize = { 0 };

// get binning mode sensor output size,
// if more than one binning mode, choose the largest one
for (UINT i = 0; i < pCameraInfo->m_cameraCaps.numSensorModes; i++)
{
CHX_LOG("i:%d, sensor mode:%d, size:%dx%d",
i, pCameraInfo->pSensorModeInfo[i].sensorModeCaps.value,
pCameraInfo->pSensorModeInfo[i].frameDimension.width,
pCameraInfo->pSensorModeInfo[i].frameDimension.height);

if (1 == pCameraInfo->pSensorModeInfo[i].sensorModeCaps.u.Normal)
{
if ((pCameraInfo->pSensorModeInfo[i].frameDimension.width > binningSize.width) ||
(pCameraInfo->pSensorModeInfo[i].frameDimension.height > binningSize.height))
{
binningSize.width = pCameraInfo->pSensorModeInfo[i].frameDimension.width;
binningSize.height = pCameraInfo->pSensorModeInfo[i].frameDimension.height;
}
}
}

CHX_LOG("sensor binning mode size:%dx%d", binningSize.width, binningSize.height);

// For Quad CFA sensor, should use binning mode for preview.
// So set postSensorUpscale flag here to allow sensor pick binning sensor mode.
m_QuadCFASensorInfo.sensorModePickHint.sensorModeCaps.value = 0;
m_QuadCFASensorInfo.sensorModePickHint.postSensorUpscale = TRUE;
m_QuadCFASensorInfo.sensorModePickHint.sensorModeCaps.u.Normal = TRUE;
m_QuadCFASensorInfo.sensorModePickHint.sensorOutputSize.width = binningSize.width;
m_QuadCFASensorInfo.sensorModePickHint.sensorOutputSize.height = binningSize.height;

// For Quad CFA usecase, should use full size mode for snapshot.
m_defaultSensorModePickHint.sensorModeCaps.value = 0;
m_defaultSensorModePickHint.postSensorUpscale = FALSE;
m_defaultSensorModePickHint.sensorModeCaps.u.QuadCFA = TRUE;
}
else if (ExtensionModule::GetInstance()->IsHDMICamera(m_cameraId))
{
m_defaultSensorModePickHint.sensorModeCaps.value = 0;
m_defaultSensorModePickHint.sensorModeCaps.u.HDMI = TRUE;
}


if (CDKResultSuccess == result)
{
//Create the features
FeatureSetup(pStreamConfig);
//Reconfigure the usecase according to the stream configuration and camera_info
result = SelectUsecaseConfig(pCameraInfo, pStreamConfig);
}

if ((NULL != m_pChiUsecase) && (CDKResultSuccess == result) && (NULL != m_pPipelineToCamera))
{
CHX_LOG_INFO("Usecase %s selected", m_pChiUsecase->pUsecaseName);

m_pCallbacks = static_cast<ChiCallBacks*>(CHX_CALLOC(sizeof(ChiCallBacks) * m_pChiUsecase->numPipelines));

CHX_LOG_INFO("Pipelines need to create in advance usecase:%d", m_pChiUsecase->numPipelines);
for (UINT i = 0; i < m_pChiUsecase->numPipelines; i++)
{
CHX_LOG_INFO("[%d/%d], pipeline name:%s, pipeline type:%d, session id:%d, camera id:%d",
i,
m_pChiUsecase->numPipelines,
m_pChiUsecase->pPipelineTargetCreateDesc[i].pPipelineName,
GetAdvancedPipelineTypeByPipelineId(i),
(NULL != m_pPipelineToSession) ? m_pPipelineToSession[i] : i,
m_pPipelineToCamera[i]);
}

if (NULL != m_pCallbacks)
{
for (UINT i = 0; i < m_pChiUsecase->numPipelines; i++)
{
m_pCallbacks[i].ChiNotify = AdvancedCameraUsecase::ProcessMessageCb;
m_pCallbacks[i].ChiProcessCaptureResult = AdvancedCameraUsecase::ProcessResultCb;
m_pCallbacks[i].ChiProcessPartialCaptureResult = AdvancedCameraUsecase::ProcessDriverPartialCaptureResultCb;
}
//Call the parent class CameraUsecaseBase::Initialize to perform common initialization work
result = CameraUsecaseBase::Initialize(m_pCallbacks, pStreamConfig);

for (UINT index = 0; index < m_pChiUsecase->numPipelines; ++index)
{
INT32 pipelineType = GET_PIPELINE_TYPE_BY_ID(m_pipelineStatus[index].pipelineId);
UINT32 rtIndex = GET_FEATURE_INSTANCE_BY_ID(m_pipelineStatus[index].pipelineId);

if (CDKInvalidId == m_metadataClients[index])
{
result = CDKResultEFailed;
break;
}

if ((rtIndex < MaxRealTimePipelines) && (pipelineType < AdvancedPipelineType::PipelineCount))
{
m_pipelineToClient[rtIndex][pipelineType] = m_metadataClients[index];
m_pMetadataManager->SetPipelineId(m_metadataClients[index], m_pipelineStatus[index].pipelineId);
}
}
}

PostUsecaseCreation(pStreamConfig);

UINT32 maxRequiredFrameCnt = GetMaxRequiredFrameCntForOfflineInput(0);
if (TRUE == IsMultiCameraUsecase())
{
//todo: it is better to calculate max required frame count according to pipeline,
// for example,some customer just want to enable MFNR feature for wide sensor,
// some customer just want to enable SWMF feature for tele sensor.
// here suppose both sensor enable same feature simply.
for (UINT i = 0; i < m_numOfPhysicalDevices; i++)
{
maxRequiredFrameCnt = GetMaxRequiredFrameCntForOfflineInput(i);
UpdateValidRDIBufferLength(i, maxRequiredFrameCnt + 1);
UpdateValidFDBufferLength(i, maxRequiredFrameCnt + 1);
CHX_LOG_CONFIG("physicalCameraIndex:%d,validBufferLength:%d",
i, GetValidBufferLength(i));
}

}
else
{
if (m_rdiStreamIndex != InvalidId)
{
UpdateValidRDIBufferLength(m_rdiStreamIndex, maxRequiredFrameCnt + 1);
CHX_LOG_INFO("m_rdiStreamIndex:%d validBufferLength:%d",
m_rdiStreamIndex, GetValidBufferLength(m_rdiStreamIndex));
}
else
{
CHX_LOG_INFO("No RDI stream");
}

if (m_fdStreamIndex != InvalidId)
{
UpdateValidFDBufferLength(m_fdStreamIndex, maxRequiredFrameCnt + 1);
CHX_LOG_INFO("m_fdStreamIndex:%d validBufferLength:%d",
m_fdStreamIndex, GetValidBufferLength(m_fdStreamIndex));
}
else
{
CHX_LOG_INFO("No FD stream");
}
}
}
else
{
result = CDKResultEFailed;
}

ATRACE_END();

return result;
}
4.3.2.4 CameraUsecaseBase::Initialize

[->vendor\qcom\proprietary\chi-cdk\core\chiusecase\chxadvancedcamerausecase.cpp]

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/// CameraUsecaseBase::Initialize
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
CDKResult CameraUsecaseBase::Initialize(
ChiCallBacks* pCallbacks,
camera3_stream_configuration_t* pStreamConfig)
{
ATRACE_BEGIN("CameraUsecaseBase::Initialize");

CDKResult result = Usecase::Initialize(false);
BOOL bReprocessUsecase = FALSE;

m_lastResultMetadataFrameNum = -1;
m_effectModeValue = ANDROID_CONTROL_EFFECT_MODE_OFF;
m_sceneModeValue = ANDROID_CONTROL_SCENE_MODE_DISABLED;
m_rtSessionIndex = InvalidId;

m_finalPipelineIDForPartialMetaData = InvalidId;

m_deferOfflineThreadCreateDone = FALSE;
m_pDeferOfflineDoneMutex = Mutex::Create();
m_pDeferOfflineDoneCondition = Condition::Create();
m_deferOfflineSessionDone = FALSE;
m_pCallBacks = pCallbacks;
m_GpuNodePresence = FALSE;
m_debugLastResultFrameNumber = static_cast<UINT32>(-1);
m_pEmptyMetaData = ChxUtils::AndroidMetadata::AllocateMetaData(0,0);
m_rdiStreamIndex = InvalidId;
m_fdStreamIndex = InvalidId;
m_isRequestBatchingOn = false;
m_batchRequestStartIndex = UINT32_MAX;
m_batchRequestEndIndex = UINT32_MAX;
m_numPCRsBeforeStreamOn = ExtensionModule::GetInstance()->GetNumPCRsBeforeStreamOn(m_cameraId);

ChxUtils::Memset(&m_sessions[0], 0, sizeof(m_sessions));

// Default to 1-1 mapping of sessions and pipelines
if (0 == m_numSessions)
{
m_numSessions = m_pChiUsecase->numPipelines;
}

CHX_ASSERT(0 != m_numSessions);

if (CDKResultSuccess == result)
{
ChxUtils::Memset(m_pClonedStream, 0, (sizeof(ChiStream*)*MaxChiStreams));
ChxUtils::Memset(m_pFrameworkOutStreams, 0, (sizeof(ChiStream*)*MaxChiStreams));
m_bCloningNeeded = FALSE;
m_numberOfOfflineStreams = 0;

for (UINT i = 0; i < m_pChiUsecase->numPipelines; i++)
{
if (m_pChiUsecase->pPipelineTargetCreateDesc[i].sourceTarget.numTargets > 0)
{
bReprocessUsecase = TRUE;
break;
}
}

for (UINT i = 0; i < m_pChiUsecase->numPipelines; i++)
{
if (TRUE == m_pChiUsecase->pPipelineTargetCreateDesc[i].pipelineCreateDesc.isRealTime)
{
// Cloning of streams needs when source target stream is enabled and
// all the streams are connected in both real time and offline pipelines
// excluding the input stream count
m_bCloningNeeded = bReprocessUsecase && (UsecaseId::PreviewZSL != m_usecaseId) &&
(m_pChiUsecase->pPipelineTargetCreateDesc[i].sinkTarget.numTargets == (m_pChiUsecase->numTargets - 1));
if (TRUE == m_bCloningNeeded)
{
break;
}
}
}
CHX_LOG("m_bCloningNeeded = %d", m_bCloningNeeded);
// here just generate internal buffer index which will be used for feature to related target buffer
GenerateInternalBufferIndex() ;

for (UINT i = 0; i < m_pChiUsecase->numPipelines; i++)
{
// use mapping if available, otherwise default to 1-1 mapping
UINT sessionId = (NULL != m_pPipelineToSession) ? m_pPipelineToSession[i] : i;
UINT pipelineId = m_sessions[sessionId].numPipelines++;

// Assign the ID to pipelineID
m_sessions[sessionId].pipelines[pipelineId].id = i;

CHX_LOG("Creating Pipeline %s at index %u for session %u, session's pipeline %u, camera id:%d",
m_pChiUsecase->pPipelineTargetCreateDesc[i].pPipelineName, i, sessionId, pipelineId, m_pPipelineToCamera[i]);

result = CreatePipeline(m_pPipelineToCamera[i],
&m_pChiUsecase->pPipelineTargetCreateDesc[i],
&m_sessions[sessionId].pipelines[pipelineId],
pStreamConfig);

if (CDKResultSuccess != result)
{
CHX_LOG_ERROR("Failed to Create Pipeline %s at index %u for session %u, session's pipeline %u, camera id:%d",
m_pChiUsecase->pPipelineTargetCreateDesc[i].pPipelineName, i, sessionId, pipelineId, m_pPipelineToCamera[i]);
break;
}

m_sessions[sessionId].pipelines[pipelineId].isHALInputStream = PipelineHasHALInputStream(&m_pChiUsecase->pPipelineTargetCreateDesc[i]);

if (FALSE == m_GpuNodePresence)
{
for (UINT nodeIndex = 0;
nodeIndex < m_pChiUsecase->pPipelineTargetCreateDesc[i].pipelineCreateDesc.numNodes; nodeIndex++)
{
UINT32 nodeIndexId =
m_pChiUsecase->pPipelineTargetCreateDesc[i].pipelineCreateDesc.pNodes->nodeId;
if (255 == nodeIndexId)
{
if (NULL != m_pChiUsecase->pPipelineTargetCreateDesc[i].pipelineCreateDesc.pNodes->pNodeProperties)
{
const CHAR* gpuNodePropertyValue = "com.qti.node.gpu";
const CHAR* nodePropertyValue = (const CHAR*)
m_pChiUsecase->pPipelineTargetCreateDesc[i].pipelineCreateDesc.pNodes->pNodeProperties->pValue;
if (!strcmp(gpuNodePropertyValue, nodePropertyValue))
{
m_GpuNodePresence = TRUE;
break;
}
}
}
}
}

PipelineCreated(sessionId, pipelineId);

}
if (CDKResultSuccess == result)
{
//create internal buffer
CreateInternalBufferManager();

//If Session's Pipeline has HAL input stream port,
//create it on main thread to return important Stream
//information during configure_stream call.
result = CreateSessionsWithInputHALStream(pCallbacks);
}

if (CDKResultSuccess == result)
{
result = StartDeferThread();
}

if (CDKResultSuccess == result)
{
result = CreateRTSessions(pCallbacks);
}

if (CDKResultSuccess == result)
{
INT32 frameworkBufferCount = BufferQueueDepth;

for (UINT32 sessionIndex = 0; sessionIndex < m_numSessions; ++sessionIndex)
{
PipelineData* pPipelineData = m_sessions[sessionIndex].pipelines;

for (UINT32 pipelineIndex = 0; pipelineIndex < m_sessions[sessionIndex].numPipelines; pipelineIndex++)
{
Pipeline* pPipeline = pPipelineData[pipelineIndex].pPipeline;
if (TRUE == pPipeline->IsRealTime())
{
m_metadataClients[pPipelineData[pipelineIndex].id] =
m_pMetadataManager->RegisterClient(
pPipeline->IsRealTime(),
pPipeline->GetTagList(),
pPipeline->GetTagCount(),
pPipeline->GetPartialTagCount(),
pPipeline->GetMetadataBufferCount() + BufferQueueDepth,
ChiMetadataUsage::RealtimeOutput);

pPipelineData[pipelineIndex].pPipeline->SetMetadataClientId(
m_metadataClients[pPipelineData[pipelineIndex].id]);

// update tag filters
PrepareHFRTagFilterList(pPipelineData[pipelineIndex].pPipeline->GetMetadataClientId());
frameworkBufferCount += pPipeline->GetMetadataBufferCount();
}
ChiMetadata* pMetadata = pPipeline->GetDescriptorMetadata();
result = pMetadata->SetTag("com.qti.chi.logicalcamerainfo", "NumPhysicalCameras", &m_numOfPhysicalDevices,
sizeof(m_numOfPhysicalDevices));
if (CDKResultSuccess != result)
{
CHX_LOG_ERROR("Failed to set metadata tag NumPhysicalCameras");
}
}
}

m_pMetadataManager->InitializeFrameworkInputClient(frameworkBufferCount);
}
}

ATRACE_END();
return result;
}
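One structural point worth noting: session creation is split between the caller's thread and a deferred worker. Sessions whose pipelines take a HAL input stream are created synchronously (CreateSessionsWithInputHALStream) because configure_streams must return their stream information; real-time sessions are then created via CreateRTSessions, while, judging by the member names (m_deferOfflineSessionDone), offline-session setup is deferred to a worker thread started by StartDeferThread so configure_streams can return sooner. The handshake around m_pDeferOfflineDoneMutex / m_pDeferOfflineDoneCondition is the standard defer-and-wait pattern; in portable C++ it would look like this sketch (not the CHX Mutex/Condition API):

#include <condition_variable>
#include <mutex>

// Portable sketch of the defer-and-wait handshake used above.
std::mutex gDeferMutex;
std::condition_variable gDeferCondition;
bool gDeferDone = false;

void WaitForDeferredSetup()
{
    std::unique_lock<std::mutex> lock(gDeferMutex);
    gDeferCondition.wait(lock, [] { return gDeferDone; }); // block until the worker signals
}

void SignalDeferredSetupDone()
{
    {
        std::lock_guard<std::mutex> lock(gDeferMutex);
        gDeferDone = true;
    }
    gDeferCondition.notify_all();
}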

4.3.3 CreatePipeline

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/// CameraUsecaseBase::CreatePipeline
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
CDKResult CameraUsecaseBase::CreatePipeline(
UINT32 cameraId,
ChiPipelineTargetCreateDescriptor* pPipelineDesc,
PipelineData* pPipelineData,
camera3_stream_configuration_t* pStreamConfig)
{
CDKResult result = CDKResultSuccess;

pPipelineData->pPipeline = Pipeline::Create(cameraId, PipelineType::Default, pPipelineDesc->pPipelineName);

if (NULL != pPipelineData->pPipeline)
{
UINT numStreams = 0;
ChiTargetPortDescriptorInfo* pSinkTarget = &pPipelineDesc->sinkTarget;
ChiTargetPortDescriptorInfo* pSrcTarget = &pPipelineDesc->sourceTarget;

ChiPortBufferDescriptor pipelineOutputBuffer[MaxChiStreams];
ChiPortBufferDescriptor pipelineInputBuffer[MaxChiStreams];

ChxUtils::Memset(pipelineOutputBuffer, 0, sizeof(pipelineOutputBuffer));
ChxUtils::Memset(pipelineInputBuffer, 0, sizeof(pipelineInputBuffer));

UINT32 tagId = ExtensionModule::GetInstance()->GetVendorTagId(VendorTag::FastShutterMode);
UINT8 isFSMode = 0;
if (StreamConfigModeFastShutter == ExtensionModule::GetInstance()->GetOpMode(m_cameraId))
{
isFSMode = 1;
}

if (TRUE == pPipelineData->pPipeline->HasSensorNode(&pPipelineDesc->pipelineCreateDesc))
{
ChiMetadata* pMetadata = pPipelineData->pPipeline->GetDescriptorMetadata();
if (NULL != pMetadata)
{
CSIDBinningInfo binningInfo ={ 0 };
CameraCSIDTrigger(&binningInfo, pPipelineDesc);

result = pMetadata->SetTag("org.quic.camera.ifecsidconfig",
"csidbinninginfo",
&binningInfo,
sizeof(binningInfo));
if (CDKResultSuccess != result)
{
CHX_LOG_ERROR("Failed to set metadata ifecsidconfig");
result = CDKResultSuccess;
}
}
}

result = pPipelineData->pPipeline->SetVendorTag(tagId, static_cast<VOID*>(&isFSMode), 1);
if (CDKResultSuccess != result)
{
CHX_LOG_ERROR("Failed to set metadata FSMode");
result = CDKResultSuccess;
}

if (NULL != pStreamConfig)
{
pPipelineData->pPipeline->SetAndroidMetadata(pStreamConfig);
}

for (UINT sinkIdx = 0; sinkIdx < pSinkTarget->numTargets; sinkIdx++)
{
ChiTargetPortDescriptor* pSinkTargetDesc = &pSinkTarget->pTargetPortDesc[sinkIdx];


UINT previewFPS = ExtensionModule::GetInstance()->GetPreviewFPS();
UINT videoFPS = ExtensionModule::GetInstance()->GetVideoFPS();
UINT pipelineFPS = ExtensionModule::GetInstance()->GetUsecaseMaxFPS();

pSinkTargetDesc->pTarget->pChiStream->streamParams.streamFPS = pipelineFPS;

// override ChiStream FPS value for Preview/Video streams with stream-specific values only IF
// APP has set valid stream-specific fps
if (UsecaseSelector::IsPreviewStream(reinterpret_cast<camera3_stream_t*>(pSinkTargetDesc->pTarget->pChiStream)))
{
pSinkTargetDesc->pTarget->pChiStream->streamParams.streamFPS = (previewFPS == 0) ? pipelineFPS : previewFPS;
}
else if (UsecaseSelector::IsVideoStream(reinterpret_cast<camera3_stream_t*>(pSinkTargetDesc->pTarget->pChiStream)))
{
pSinkTargetDesc->pTarget->pChiStream->streamParams.streamFPS = (videoFPS == 0) ? pipelineFPS : videoFPS;
}

if ((pSrcTarget->numTargets > 0) && (TRUE == m_bCloningNeeded))
{
m_pFrameworkOutStreams[m_numberOfOfflineStreams] = pSinkTargetDesc->pTarget->pChiStream;
m_pClonedStream[m_numberOfOfflineStreams] = static_cast<CHISTREAM*>(CHX_CALLOC(sizeof(CHISTREAM)));

ChxUtils::Memcpy(m_pClonedStream[m_numberOfOfflineStreams], pSinkTargetDesc->pTarget->pChiStream, sizeof(CHISTREAM));

pipelineOutputBuffer[sinkIdx].pStream = m_pClonedStream[m_numberOfOfflineStreams];
pipelineOutputBuffer[sinkIdx].pNodePort = pSinkTargetDesc->pNodePort;
pipelineOutputBuffer[sinkIdx].numNodePorts= pSinkTargetDesc->numNodePorts;
pPipelineData->pStreams[numStreams++] = pipelineOutputBuffer[sinkIdx].pStream;
m_numberOfOfflineStreams++;

CHX_LOG("CloningNeeded sinkIdx %d numStreams %d pStream %p nodePortId %d",
sinkIdx,
numStreams-1,
pipelineOutputBuffer[sinkIdx].pStream,
pipelineOutputBuffer[sinkIdx].pNodePort[0].nodePortId);
}
else
{
pipelineOutputBuffer[sinkIdx].pStream = pSinkTargetDesc->pTarget->pChiStream;
pipelineOutputBuffer[sinkIdx].pNodePort = pSinkTargetDesc->pNodePort;
pipelineOutputBuffer[sinkIdx].numNodePorts = pSinkTargetDesc->numNodePorts;
pPipelineData->pStreams[numStreams++] = pipelineOutputBuffer[sinkIdx].pStream;
CHX_LOG("sinkIdx %d numStreams %d pStream %p format %u %d:%d nodePortID %d",
sinkIdx,
numStreams - 1,
pipelineOutputBuffer[sinkIdx].pStream,
pipelineOutputBuffer[sinkIdx].pStream->format,
pipelineOutputBuffer[sinkIdx].pNodePort[0].nodeId,
pipelineOutputBuffer[sinkIdx].pNodePort[0].nodeInstanceId,
pipelineOutputBuffer[sinkIdx].pNodePort[0].nodePortId);
}
}

for (UINT sourceIdx = 0; sourceIdx < pSrcTarget->numTargets; sourceIdx++)
{
UINT i = 0;
ChiTargetPortDescriptor* pSrcTargetDesc = &pSrcTarget->pTargetPortDesc[sourceIdx];

pipelineInputBuffer[sourceIdx].pStream = pSrcTargetDesc->pTarget->pChiStream;

pipelineInputBuffer[sourceIdx].pNodePort = pSrcTargetDesc->pNodePort;
pipelineInputBuffer[sourceIdx].numNodePorts = pSrcTargetDesc->numNodePorts;

for (i = 0; i < numStreams; i++)
{
if (pPipelineData->pStreams[i] == pipelineInputBuffer[sourceIdx].pStream)
{
break;
}
}
if (numStreams == i)
{
pPipelineData->pStreams[numStreams++] = pipelineInputBuffer[sourceIdx].pStream;
}

for (UINT portIndex = 0; portIndex < pipelineInputBuffer[sourceIdx].numNodePorts; portIndex++)
{
CHX_LOG("sourceIdx %d portIndex %d numStreams %d pStream %p format %u %d:%d nodePortID %d",
sourceIdx,
portIndex,
numStreams - 1,
pipelineInputBuffer[sourceIdx].pStream,
pipelineInputBuffer[sourceIdx].pStream->format,
pipelineInputBuffer[sourceIdx].pNodePort[portIndex].nodeId,
pipelineInputBuffer[sourceIdx].pNodePort[portIndex].nodeInstanceId,
pipelineInputBuffer[sourceIdx].pNodePort[portIndex].nodePortId);
}
}
pPipelineData->pPipeline->SetOutputBuffers(pSinkTarget->numTargets, &pipelineOutputBuffer[0]);
pPipelineData->pPipeline->SetInputBuffers(pSrcTarget->numTargets, &pipelineInputBuffer[0]);
pPipelineData->pPipeline->SetPipelineNodePorts(&pPipelineDesc->pipelineCreateDesc);
pPipelineData->pPipeline->SetPipelineName(pPipelineDesc->pPipelineName);

CHX_LOG("set sensor mode pick hint: %p", GetSensorModePickHint(pPipelineData->id));
pPipelineData->pPipeline->SetSensorModePickHint(GetSensorModePickHint(pPipelineData->id));

pPipelineData->numStreams = numStreams;

result = pPipelineData->pPipeline->CreateDescriptor();
}

return result;
}
4.3.3.1 Pipeline::Create

[->vendor\qcom\proprietary\chi-cdk\core\chiframework\chxpipeline.cpp]

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// Pipeline::Create
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
Pipeline* Pipeline::Create(
UINT32 cameraId,
PipelineType type,
const CHAR* pName)
{
Pipeline* pPipeline = CHX_NEW Pipeline;

if (NULL != pPipeline)
{
pPipeline->Initialize(cameraId, type);

pPipeline->m_pPipelineName = pName;
}

return pPipeline;
}
4.3.3.2 Pipeline::Initialize
CamxResult Pipeline::Initialize(
PipelineCreateInputData* pPipelineCreateInputData,
PipelineCreateOutputData* pPipelineCreateOutputData)
{
CamxResult result = CamxResultEFailed;

m_pChiContext = pPipelineCreateInputData->pChiContext;
m_flags.isSecureMode = pPipelineCreateInputData->isSecureMode;
m_flags.isHFRMode = pPipelineCreateInputData->pPipelineDescriptor->flags.isHFRMode;
m_flags.isInitialConfigPending = TRUE;
m_pThreadManager = pPipelineCreateInputData->pChiContext->GetThreadManager();
m_pPipelineDescriptor = pPipelineCreateInputData->pPipelineDescriptor;
m_pipelineIndex = pPipelineCreateInputData->pipelineIndex;
m_cameraId = m_pPipelineDescriptor->cameraId;
m_hCSLLinkHandle = CSLInvalidHandle;
m_numConfigDoneNodes = 0;
m_lastRequestId = 0;
m_configDoneCount = 0;
m_hCSLLinkHandle = 0;
m_HALOutputBufferCombined = m_pPipelineDescriptor->HALOutputBufferCombined;
m_lastSubmittedShutterRequestId = 0;
m_pTuningManager = HwEnvironment::GetInstance()->GetTuningDataManager(m_cameraId);
m_sensorSyncMode = NoSync;

// Create lock and condition for config done
m_pConfigDoneLock = Mutex::Create("PipelineConfigDoneLock");
m_pWaitForConfigDone = Condition::Create("PipelineWaitForConfigDone");

// Resource lock, used to syncronize acquire resources and release resources
m_pResourceAcquireReleaseLock = Mutex::Create("PipelineResourceAcquireReleaseLock");
if (NULL == m_pResourceAcquireReleaseLock)
{
CAMX_LOG_ERROR(CamxLogGroupCore, "Out of memory!!");
return CamxResultENoMemory;
}

m_pWaitForStreamOnDone = Condition::Create("PipelineWaitForStreamOnDone");
if (NULL == m_pWaitForStreamOnDone)
{
CAMX_LOG_ERROR(CamxLogGroupCore, "Out of memory!!");
return CamxResultENoMemory;
}

m_pStreamOnDoneLock = Mutex::Create("PipelineStreamOnDoneLock");
if (NULL == m_pStreamOnDoneLock)
{
CAMX_LOG_ERROR(CamxLogGroupCore, "Out of memory!!");
return CamxResultENoMemory;
}

m_pNodesRequestDoneLock = Mutex::Create("PipelineAllNodesRequestDone");
if (NULL == m_pNodesRequestDoneLock)
{
CAMX_LOG_ERROR(CamxLogGroupCore, "Out of memory!!");
return CamxResultENoMemory;
}

m_pWaitAllNodesRequestDone = Condition::Create("PipelineWaitAllNodesRequestDone");
if (NULL == m_pWaitAllNodesRequestDone)
{
CAMX_LOG_ERROR(CamxLogGroupCore, "Out of memory!!");
return CamxResultENoMemory;
}

// Create external Sensor when sensor module is enabled
// External Sensor Module is created so as to test CAMX ability to work with OEMs
// who has external sensor (ie they do all sensor configuration outside of driver
// and there is no sensor node in the pipeline )
HwContext* pHwcontext = pPipelineCreateInputData->pChiContext->GetHwContext();
if (TRUE == pHwcontext->GetStaticSettings()->enableExternalSensorModule)
{
m_pExternalSensor = ExternalSensor::Create();
CAMX_ASSERT(NULL != m_pExternalSensor);
}

CAMX_ASSERT(NULL != m_pConfigDoneLock);
CAMX_ASSERT(NULL != m_pWaitForConfigDone);
CAMX_ASSERT(NULL != m_pResourceAcquireReleaseLock);

OsUtils::SNPrintF(m_pipelineIdentifierString, sizeof(m_pipelineIdentifierString), "%s_%d",
GetPipelineName(), GetPipelineId());

// We can't defer UsecasePool since we are publishing preview dimension to it.
m_pUsecasePool = MetadataPool::Create(PoolType::PerUsecase, m_pipelineIndex, NULL, 1, GetPipelineIdentifierString(), 0);

if (NULL != m_pUsecasePool)
{
m_pUsecasePool->UpdateRequestId(0); // Usecase pool created, mark the slot as valid
}
else
{
CAMX_LOG_ERROR(CamxLogGroupCore, "Out of memory!!");
return CamxResultENoMemory;
}

SetLCRRawformatPorts();

SetNumBatchedFrames(m_pPipelineDescriptor->numBatchedFrames, m_pPipelineDescriptor->maxFPSValue);

m_pCSLSyncIDToRequestId = static_cast<UINT64*>(CAMX_CALLOC(sizeof(UINT64) * MaxPerRequestInfo * GetBatchedHALOutputNum()));

if (NULL == m_pCSLSyncIDToRequestId)
{
CAMX_LOG_ERROR(CamxLogGroupCore, "Out of memory!!");
return CamxResultENoMemory;
}

m_pStreamBufferBlob = static_cast<StreamBufferInfo*>(CAMX_CALLOC(sizeof(StreamBufferInfo) * GetBatchedHALOutputNum() *
MaxPerRequestInfo));
if (NULL == m_pStreamBufferBlob)
{
CAMX_LOG_ERROR(CamxLogGroupCore, "Out of memory!!");
return CamxResultENoMemory;
}

for (UINT i = 0; i < MaxPerRequestInfo; i++)
{
m_perRequestInfo[i].pSequenceId = static_cast<UINT32*>(CAMX_CALLOC(sizeof(UINT32) * GetBatchedHALOutputNum()));

if (NULL == m_perRequestInfo[i].pSequenceId)
{
CAMX_LOG_ERROR(CamxLogGroupCore, "Out of memory!!");
return CamxResultENoMemory;
}
m_perRequestInfo[i].request.pStreamBuffers = &m_pStreamBufferBlob[i * GetBatchedHALOutputNum()];
}

MetadataSlot* pMetadataSlot = m_pUsecasePool->GetSlot(0);
MetaBuffer* pInitializationMetaBuffer = m_pPipelineDescriptor->pSessionMetadata;
MetaBuffer* pMetadataSlotDstBuffer = NULL;

// Copy metadata published by the Chi Usecase to this pipeline's UsecasePool
if (NULL != pInitializationMetaBuffer)
{
result = pMetadataSlot->GetMetabuffer(&pMetadataSlotDstBuffer);

if (CamxResultSuccess == result)
{
pMetadataSlotDstBuffer->Copy(pInitializationMetaBuffer, TRUE);
}
else
{
CAMX_LOG_ERROR(CamxLogGroupMeta, "Cannot copy! Error Code: %u", result);
}
}
else
{
CAMX_LOG_WARN(CamxLogGroupMeta, "No init metadata found!");
}

if (CamxResultSuccess == result)
{
UINT32 metaTag = 0;
UINT sleepStaticSetting = HwEnvironment::GetInstance()->GetStaticSettings()->induceSleepInChiNode;

result = VendorTagManager::QueryVendorTagLocation(
"org.quic.camera.induceSleepInChiNode",
"InduceSleep",
&metaTag);

if (CamxResultSuccess == result)
{
result = pMetadataSlot->SetMetadataByTag(metaTag, &sleepStaticSetting, 1, "camx_session");

if (CamxResultSuccess != result)
{
CAMX_LOG_ERROR(CamxLogGroupCore, "Failed to set Induce sleep result %d", result);
}
}
}

GetCameraRunningOnBPS(pMetadataSlot);

ConfigureMaxPipelineDelay(m_pPipelineDescriptor->maxFPSValue,
(FALSE == m_flags.isCameraRunningOnBPS) ? DefaultMaxIFEPipelineDelay : DefaultMaxBPSPipelineDelay);

QueryEISCaps();

PublishOutputDimensions();
PublishTargetFPS();

if (CamxResultSuccess == result)
{
result = PopulatePSMetadataSet();
}

result = CreateNodes(pPipelineCreateInputData, pPipelineCreateOutputData);

// set frame delay in session metadata
if (CamxResultSuccess == result)
{
UINT32 metaTag = 0;
UINT32 frameDelay = DetermineFrameDelay();
result = VendorTagManager::QueryVendorTagLocation(
"org.quic.camera.eislookahead", "FrameDelay", &metaTag);
if (CamxResultSuccess == result)
{
MetaBuffer* pSessionMetaBuffer = m_pPipelineDescriptor->pSessionMetadata;
if (NULL != pSessionMetaBuffer)
{
result = pSessionMetaBuffer->SetTag(metaTag, &frameDelay, 1, sizeof(UINT32));
}
else
{
result = CamxResultEInvalidPointer;
CAMX_LOG_ERROR(CamxLogGroupCore, "Session metadata pointer null");
}
}
}

// set EIS enabled flag in session metadata
if (CamxResultSuccess == result)
{
UINT32 metaTag = 0;
BOOL bEnabled = IsEISEnabled();
result = VendorTagManager::QueryVendorTagLocation("org.quic.camera.eisrealtime", "Enabled", &metaTag);

// write the enabled flag only if it's set to TRUE. IsEISEnabled may return FALSE when vendor tag is not published too
if ((TRUE == bEnabled) && (CamxResultSuccess == result))
{
MetaBuffer* pSessionMetaBuffer = m_pPipelineDescriptor->pSessionMetadata;
if (NULL != pSessionMetaBuffer)
{
result = pSessionMetaBuffer->SetTag(metaTag, &bEnabled, 1, sizeof(BYTE));
}
else
{
result = CamxResultEInvalidPointer;
CAMX_LOG_ERROR(CamxLogGroupCore, "Session metadata pointer null");
}
}
}

// set EIS minimal total margin in session metadata
if (CamxResultSuccess == result)
{
UINT32 metaTag = 0;
MarginRequest margin = { 0 };

result = DetermineEISMiniamalTotalMargin(&margin);

if (CamxResultSuccess == result)
{
result = VendorTagManager::QueryVendorTagLocation("org.quic.camera.eisrealtime", "MinimalTotalMargins", &metaTag);
}

if (CamxResultSuccess == result)
{
MetaBuffer* pSessionMetaBuffer = m_pPipelineDescriptor->pSessionMetadata;
if (NULL != pSessionMetaBuffer)
{
result = pSessionMetaBuffer->SetTag(metaTag, &margin, 1, sizeof(MarginRequest));
}
else
{
result = CamxResultEInvalidPointer;
CAMX_LOG_ERROR(CamxLogGroupCore, "Session metadata pointer null");
}
}
}

if (CamxResultSuccess == result)
{
for (UINT i = 0; i < m_nodeCount; i++)
{
result = FilterAndUpdatePublishSet(m_ppNodes[i]);
}
}

if (HwEnvironment::GetInstance()->GetStaticSettings()->numMetadataResults > SingleMetadataResult)
{
m_bPartialMetadataEnabled = TRUE;
}

if ((TRUE == m_bPartialMetadataEnabled) && (TRUE == m_flags.isHFRMode))
{
CAMX_LOG_CONFIG(CamxLogGroupCore, "Disable partial metadata in HFR mode");
m_bPartialMetadataEnabled = FALSE;
}

if (CamxResultSuccess == result)
{
m_pPerRequestInfoLock = Mutex::Create("PipelineRequestInfo");
if (NULL != m_pPerRequestInfoLock)
{
if (IsRealTime())
{
m_metaBufferDelay = Utils::MaxUINT32(
GetMaxPipelineDelay(),
DetermineFrameDelay());
}
else
{
m_metaBufferDelay = 0;
}
}
else
{
result = CamxResultENoMemory;
}
}

if (CamxResultSuccess == result)
{
if (IsRealTime())
{
m_metaBufferDelay = Utils::MaxUINT32(
GetMaxPipelineDelay(),
DetermineFrameDelay());
}
else
{
m_metaBufferDelay = 0;
}

UpdatePublishTags();
}

if (CamxResultSuccess == result)
{
pPipelineCreateOutputData->pPipeline = this;
SetPipelineStatus(PipelineStatus::INITIALIZED);
auto& rPipelineName = m_pipelineIdentifierString;
UINT pipelineId = GetPipelineId();
BOOL isRealtime = IsRealTime();
auto hPipeline = m_pPipelineDescriptor;
BINARY_LOG(LogEvent::Pipeline_Initialize, rPipelineName, pipelineId, hPipeline);
}

return result;
}
4.3.3.3 Pipeline::CreateNodes

[->vendor\qcom\proprietary\camx\src\core\camxpipeline.cpp]
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/// Pipeline::CreateNodes
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
CamxResult Pipeline::CreateNodes(
PipelineCreateInputData* pCreateInputData,
PipelineCreateOutputData* pCreateOutputData)
{
/// @todo (CAMX-423) Break it into smaller functions

CAMX_UNREFERENCED_PARAM(pCreateOutputData);

CamxResult result = CamxResultSuccess;
const PipelineDescriptor* pPipelineDescriptor = pCreateInputData->pPipelineDescriptor;
const PerPipelineInfo* pPipelineInfo = &pPipelineDescriptor->pipelineInfo;
UINT numInPlaceSinkBufferNodes = 0;
Node* pInplaceSinkBufferNode[MaxNodeType];
UINT numBypassableNodes = 0;
Node* pBypassableNodes[MaxNodeType];
ExternalComponentInfo* pExternalComponentInfo = HwEnvironment::GetInstance()->GetExternalComponent();
UINT numExternalComponents = HwEnvironment::GetInstance()->GetNumExternalComponent();

CAMX_ASSERT(NULL == m_ppNodes);
m_onlineIFENodeCount = 0;

m_nodeCount = pPipelineInfo->numNodes;
m_ppNodes = static_cast<Node**>(CAMX_CALLOC(sizeof(Node*) * m_nodeCount));
m_ppOrderedNodes = static_cast<Node**>(CAMX_CALLOC(sizeof(Node*) * m_nodeCount));

CAMX_ASSERT(NULL != m_ppOrderedNodes);

if ((NULL != m_ppNodes) &&
(NULL != m_ppOrderedNodes))
{
NodeCreateInputData createInputData = { 0 };

createInputData.pPipeline = this;
createInputData.pChiContext = pCreateInputData->pChiContext;

UINT nodeIndex = 0;

CAMX_LOG_CONFIG(CamxLogGroupCore,
"Topology: Creating Pipeline %s, numNodes %d isSensorInput %d isRealTime %d",
GetPipelineIdentifierString(),
m_nodeCount,
IsSensorInput(),
IsRealTime());

for (UINT numNodes = 0; numNodes < m_nodeCount; numNodes++)
{
NodeCreateOutputData createOutputData = { 0 };
createInputData.pNodeInfo = &(pPipelineInfo->pNodeInfo[numNodes]);
createInputData.pipelineNodeIndex = numNodes;

for (UINT propertyIndex = 0; propertyIndex < createInputData.pNodeInfo->nodePropertyCount; propertyIndex++)
{
for (UINT index = 0; index < numExternalComponents; index++)
{
if ((pExternalComponentInfo[index].nodeAlgoType == ExternalComponentNodeAlgo::COMPONENTALGORITHM) &&
(NodePropertyCustomLib == createInputData.pNodeInfo->pNodeProperties[propertyIndex].id))
{
CHAR matchString[FILENAME_MAX] = {0};
OsUtils::SNPrintF(matchString, FILENAME_MAX, "%s.%s",
static_cast<CHAR*>(createInputData.pNodeInfo->pNodeProperties[propertyIndex].pValue),
SharedLibraryExtension);

if (OsUtils::StrNICmp(pExternalComponentInfo[index].pComponentName,
matchString,
OsUtils::StrLen(pExternalComponentInfo[index].pComponentName)) == 0)
{
if (pExternalComponentInfo[index].statsAlgo == ExternalComponentStatsAlgo::ALGOAF)
{
createInputData.pAFAlgoCallbacks = &pExternalComponentInfo[index].AFAlgoCallbacks;
}
else if (pExternalComponentInfo[index].statsAlgo == ExternalComponentStatsAlgo::ALGOAEC)
{
createInputData.pAECAlgoCallbacks = &pExternalComponentInfo[index].AECAlgoCallbacks;
}
else if (pExternalComponentInfo[index].statsAlgo == ExternalComponentStatsAlgo::ALGOAWB)
{
createInputData.pAWBAlgoCallbacks = &pExternalComponentInfo[index].AWBAlgoCallbacks;
}
else if (pExternalComponentInfo[index].statsAlgo == ExternalComponentStatsAlgo::ALGOAFD)
{
createInputData.pAFDAlgoCallbacks = &pExternalComponentInfo[index].AFDAlgoCallbacks;
}
else if (pExternalComponentInfo[index].statsAlgo == ExternalComponentStatsAlgo::ALGOASD)
{
createInputData.pASDAlgoCallbacks = &pExternalComponentInfo[index].ASDAlgoCallbacks;
}
else if (pExternalComponentInfo[index].statsAlgo == ExternalComponentStatsAlgo::ALGOPD)
{
createInputData.pPDLibCallbacks = &pExternalComponentInfo[index].PDLibCallbacks;
}
else if (pExternalComponentInfo[index].statsAlgo == ExternalComponentStatsAlgo::ALGOHIST)
{
createInputData.pHistAlgoCallbacks = &pExternalComponentInfo[index].histAlgoCallbacks;
}
else if (pExternalComponentInfo[index].statsAlgo == ExternalComponentStatsAlgo::ALGOTRACK)
{
createInputData.pTrackerAlgoCallbacks = &pExternalComponentInfo[index].trackerAlgoCallbacks;
}
}
}
else if ((pExternalComponentInfo[index].nodeAlgoType == ExternalComponentNodeAlgo::COMPONENTHVX) &&
(NodePropertyCustomLib == createInputData.pNodeInfo->pNodeProperties[propertyIndex].id) &&
(OsUtils::StrStr(pExternalComponentInfo[index].pComponentName,
static_cast<CHAR*>(createInputData.pNodeInfo->pNodeProperties[propertyIndex].pValue)) != NULL))
{
createInputData.pHVXAlgoCallbacks = &pExternalComponentInfo[index].HVXAlgoCallbacks;
}
}
}

result = Node::Create(&createInputData, &createOutputData);

if (CamxResultSuccess == result)
{
CAMX_LOG_CONFIG(CamxLogGroupCore,
"Topology::%s Node::%s Type %d numInputPorts %d numOutputPorts %d",
GetPipelineIdentifierString(),
createOutputData.pNode->NodeIdentifierString(),
createOutputData.pNode->Type(),
createInputData.pNodeInfo->inputPorts.numPorts,
createInputData.pNodeInfo->outputPorts.numPorts);

if (CamxResultSuccess != result)
{
CAMX_LOG_WARN(CamxLogGroupCore, "[%s] Cannot get publish list for %s",
GetPipelineIdentifierString(), createOutputData.pNode->NodeIdentifierString());
}

if (StatsProcessing == createOutputData.pNode->Type())
{
m_flags.hasStatsNode = TRUE;
}

if (IFENodeID == createOutputData.pNode->Type())
{
// SOF will come from all online IFE present in pipeline
// one pipeline can have multiple IFE in case of
// virtual channel support. Need to consolidate SOFs.
m_onlineIFENodeCount++;
}
if (0x10000 == createOutputData.pNode->Type())
{
m_flags.hasIFENode = TRUE;
}

if ((JPEGAggregator == createOutputData.pNode->Type()) || (0x10001 == createOutputData.pNode->Type()))
{
m_flags.hasJPEGNode = TRUE;
}

m_ppNodes[nodeIndex] = createOutputData.pNode;

if ((TRUE == createOutputData.createFlags.isSinkBuffer) ||
(TRUE == createOutputData.createFlags.isSinkNoBuffer))
{
m_nodesSinkOutPorts.nodeIndices[m_nodesSinkOutPorts.numNodes] = nodeIndex;
m_nodesSinkOutPorts.numNodes++;
}

if ((TRUE == createOutputData.createFlags.isSinkBuffer) && (TRUE == createOutputData.createFlags.isInPlace))
{
pInplaceSinkBufferNode[numInPlaceSinkBufferNodes] = createOutputData.pNode;
numInPlaceSinkBufferNodes++;
}

if (TRUE == createOutputData.createFlags.isBypassable)
{
pBypassableNodes[numBypassableNodes] = createOutputData.pNode;
numBypassableNodes++;
}

if ((TRUE == createOutputData.createFlags.isSourceBuffer) || (Sensor == m_ppNodes[nodeIndex]->Type()))
{
m_nodesSourceInPorts.nodeIndices[m_nodesSourceInPorts.numNodes] = nodeIndex;
m_nodesSourceInPorts.numNodes++;
}

if (TRUE == createOutputData.createFlags.willNotifyConfigDone)
{
m_numConfigDoneNodes++;
}

if (TRUE == createOutputData.createFlags.hasDelayedNotification)
{
m_isDelayedPipeline = TRUE;
}

nodeIndex++;
}
else
{
break;
}
}

if (CamxResultSuccess == result)
{
// Set the input link of the nodes - basically connects output port of one node to input port of another
for (UINT nodeIndexInner = 0; nodeIndexInner < m_nodeCount; nodeIndexInner++)
{
const PerNodeInfo* pXMLNode = &pPipelineInfo->pNodeInfo[nodeIndexInner];

for (UINT inputPortIndex = 0; inputPortIndex < pXMLNode->inputPorts.numPorts; inputPortIndex++)
{
const InputPortInfo* pInputPortInfo = &pXMLNode->inputPorts.pPortInfo[inputPortIndex];

if (FALSE == m_ppNodes[nodeIndexInner]->IsSourceBufferInputPort(inputPortIndex))
{
m_ppNodes[nodeIndexInner]->SetInputLink(inputPortIndex,
pInputPortInfo->portId,
m_ppNodes[pInputPortInfo->parentNodeIndex],
pInputPortInfo->parentOutputPortId);

m_ppNodes[nodeIndexInner]->SetUpLoopBackPorts(inputPortIndex);

/// In the parent node's output port, Save this node as one of the output node connected to it.
m_ppNodes[pInputPortInfo->parentNodeIndex]->AddOutputNodes(pInputPortInfo->parentOutputPortId,
m_ppNodes[nodeIndexInner]);

/// Update access device index list for the source port based on current nodes device index list
/// At this point the source node which maintains the output buffer manager have the access information
/// required for buffer manager creation.
m_ppNodes[pInputPortInfo->parentNodeIndex]->AddOutputDeviceIndices(
pInputPortInfo->parentOutputPortId,
m_ppNodes[nodeIndexInner]->DeviceIndices(),
m_ppNodes[nodeIndexInner]->DeviceIndexCount());

const ImageFormat* pImageFormat = m_ppNodes[nodeIndexInner]->GetInputPortImageFormat(inputPortIndex);
if (NULL != pImageFormat)
{
CAMX_LOG_CONFIG(CamxLogGroupCore,
"Topology: Pipeline[%s] "
"Link: Node::%s(outPort %d) --> (inPort %d) Node::%s using format %d",
GetPipelineIdentifierString(),
m_ppNodes[pInputPortInfo->parentNodeIndex]->NodeIdentifierString(),
pInputPortInfo->parentOutputPortId,
pInputPortInfo->portId,
m_ppNodes[nodeIndexInner]->NodeIdentifierString(),
pImageFormat->format);
}
else
{
CAMX_LOG_ERROR(CamxLogGroupCore, "Node::%s Invalid pImageFormat",
m_ppNodes[nodeIndexInner]->NodeIdentifierString());
}
}
else
{
m_ppNodes[nodeIndexInner]->SetupSourcePort(inputPortIndex, pInputPortInfo->portId);
}
}
if (TRUE == m_ppNodes[nodeIndexInner]->IsLoopBackNode())
{
m_ppNodes[nodeIndexInner]->EnableParentOutputPorts();
}
}
}

/// @todo (CAMX-1015) Look into non recursive implementation
if (CamxResultSuccess == result)
{
for (UINT index = 0; index < m_nodesSinkOutPorts.numNodes; index++)
{
if (NULL != m_ppNodes[m_nodesSinkOutPorts.nodeIndices[index]])
{
m_ppNodes[m_nodesSinkOutPorts.nodeIndices[index]]->TriggerOutputPortStreamIdSetup();
}
}
}
}
else
{
CAMX_LOG_ERROR(CamxLogGroupCore, "m_ppNodes or m_ppOrderedNodes is Null");
result = CamxResultENoMemory;
}

// Bypass node processing
if (CamxResultSuccess == result)
{
for (UINT index = 0; index < numBypassableNodes; index++)
{
pBypassableNodes[index]->BypassNodeProcessing();
}
}

if (CamxResultSuccess == result)
{
for (UINT index = 0; index < numInPlaceSinkBufferNodes; index++)
{
pInplaceSinkBufferNode[index]->TriggerInplaceProcessing();
}
}

if (CamxResultSuccess == result)
{
for (UINT index = 0; index < m_nodesSinkOutPorts.numNodes; index++)
{
CAMX_ASSERT((NULL != m_ppNodes) && (NULL != m_ppNodes[m_nodesSinkOutPorts.nodeIndices[index]]));

Node* pNode = m_ppNodes[m_nodesSinkOutPorts.nodeIndices[index]];

result = pNode->TriggerBufferNegotiation();

if (CamxResultSuccess != result)
{
CAMX_LOG_WARN(CamxLogGroupCore, "Unable to satisfy node input buffer requirements, retrying with NV12");
break;
}
}
if (CamxResultSuccess != result)
{
result = RenegotiateInputBufferRequirement(pCreateInputData, pCreateOutputData);
}
}

if (CamxResultSuccess != result)
{
CAMX_ASSERT_ALWAYS();
CAMX_LOG_ERROR(CamxLogGroupCore, "%s Creating Nodes Failed. Going to Destroy sequence", GetPipelineIdentifierString());
DestroyNodes();
}
else
{
UINT numInputs = 0;

for (UINT index = 0; index < m_nodesSourceInPorts.numNodes; index++)
{
Node* pNode = m_ppNodes[m_nodesSourceInPorts.nodeIndices[index]];
ChiPipelineInputOptions* pInputOptions = &pCreateOutputData->pPipelineInputOptions[numInputs];

numInputs += pNode->FillPipelineInputOptions(pInputOptions);
}

pCreateOutputData->numInputs = numInputs;
}

return result;
}

4.3.4 CreateSessions

4.3.4.1 CreateSessionsWithInputHALStream
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/// CameraUsecaseBase::CreateSessionsWithInputHALStream
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
CDKResult CameraUsecaseBase::CreateSessionsWithInputHALStream(
ChiCallBacks* pCallbacks)
{
CDKResult result = CDKResultSuccess;
BOOL inputHALStream = FALSE;
UINT numSessionsMinusOne = (m_numSessions >= 1) ? (m_numSessions - 1) : 0;

for (INT sessionId = numSessionsMinusOne; sessionId >= 0; sessionId--)
{
Pipeline* pPipelines[MaxPipelinesPerSession] = { 0 };

// Accumulate the pipeline pointers to an array to pass the session creation
for (UINT pipelineId = 0; pipelineId < m_sessions[sessionId].numPipelines; pipelineId++)
{
if (TRUE == m_sessions[sessionId].pipelines[pipelineId].isHALInputStream)
{
inputHALStream = TRUE;
}

pPipelines[pipelineId] = m_sessions[sessionId].pipelines[pipelineId].pPipeline;
}

if (TRUE == inputHALStream)
{
result = CreateSession(sessionId,
pPipelines,
pCallbacks);
}
}

if (result != CDKResultSuccess)
{
for (INT sessionId = m_numSessions - 1; sessionId >= 0; sessionId--)
{
if (NULL != m_sessions[sessionId].pSession)
{
m_sessions[sessionId].pSession->Destroy(TRUE);
m_sessions[sessionId].pSession = NULL;
}
}
}

return result;
}
4.3.4.2 CreateSession
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/// CameraUsecaseBase::CreateSession
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
CDKResult CameraUsecaseBase::CreateSession(
INT sessionId,
Pipeline** ppPipelines,
ChiCallBacks* pCallbacks)
{
CDKResult result = CDKResultSuccess;

CHX_LOG("Creating session %d ", sessionId);

m_perSessionPvtData[sessionId].sessionId = sessionId;
m_perSessionPvtData[sessionId].pUsecase = this;

if (NULL == m_sessions[sessionId].pSession)
{
m_sessions[sessionId].pSession = Session::Create(ppPipelines,
m_sessions[sessionId].numPipelines,
&pCallbacks[sessionId],
&m_perSessionPvtData[sessionId]);
}

if (NULL == m_sessions[sessionId].pSession)
{
CHX_LOG_ERROR("Failed to create offline session, sessionId: %d", sessionId);
result = CDKResultEFailed;
}
else
{
CHX_LOG("success Creating Session %d", sessionId);
}

return result;
}

4.3.5 Summary

Configuring the data streams is one of the more important stages in the overall CamX-CHI flow. It consists of two main phases:

  1. Selecting a UsecaseId
  2. Creating a Usecase based on the selected UsecaseId

① Selecting a UsecaseId

Each UsecaseId corresponds to a different application scenario. This phase is implemented by calling UsecaseSelector::GetMatchingUsecase(), which picks the appropriate UsecaseId based on the incoming operation_mode, the number of configured streams (num_streams), and the number of sensors currently in use. For example, when numPhysicalCameras is greater than 1 and the number of configured streams num_streams is also greater than 1, UsecaseId::MultiCamera is selected, indicating a dual-camera scenario. A sketch of this selection logic follows.
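As a quick illustration, here is a minimal, hypothetical sketch of that selection logic. The struct, the function name, and every enum value other than UsecaseId::MultiCamera are simplified stand-ins, not the actual UsecaseSelector source:

// Hypothetical sketch only -- not the real UsecaseSelector::GetMatchingUsecase().
enum class UsecaseId { Default, MultiCamera };

struct StreamConfigSketch
{
    unsigned int numStreams;     // number of configured streams (num_streams)
    unsigned int operationMode;  // operation_mode from configure_streams
};

UsecaseId GetMatchingUsecaseSketch(unsigned int              numPhysicalCameras,
                                   const StreamConfigSketch& cfg)
{
    // More than one physical camera and more than one stream -> dual camera
    if ((numPhysicalCameras > 1) && (cfg.numStreams > 1))
    {
        return UsecaseId::MultiCamera;
    }
    // The real selector also branches on Quad-CFA sensors, HFR modes, etc.
    return UsecaseId::Default;
}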

② Creating the Usecase

Based on the previously selected UsecaseId, the corresponding Usecase is created through UsecaseFactory.

Class Usecase is the base class of all usecases; it defines and implements a set of common interfaces. CameraUsecaseBase inherits from Usecase and extends part of its functionality. AdvancedCameraUsecase in turn inherits from CameraUsecaseBase and serves as the Usecase implementation class responsible for most scenarios. In addition, for multi-camera scenarios, UsecaseMultiCamera, which inherits from AdvancedCameraUsecase, is provided.

Apart from the dual-camera scenario, most scenarios use the AdvancedCameraUsecase class to manage their resources, as sketched below.
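The following sketch summarizes the hierarchy and the factory step under simplified, assumed signatures; the real classes carry many more members and the real UsecaseFactory handles more UsecaseIds:

// Simplified stand-ins for the real classes; method lists are trimmed.
class Usecase               { /* common interfaces, e.g. ProcessCaptureRequest() */ };
class CameraUsecaseBase     : public Usecase             { /* pipeline/session helpers */ };
class AdvancedCameraUsecase : public CameraUsecaseBase   { /* most scenarios */ };
class UsecaseMultiCamera    : public AdvancedCameraUsecase { /* multi camera */ };

enum class UsecaseId { Default, MultiCamera };

// Toy factory mirroring the role UsecaseFactory plays as described above.
Usecase* CreateUsecaseSketch(UsecaseId id)
{
    if (UsecaseId::MultiCamera == id)
    {
        return new UsecaseMultiCamera();  // dual/multi camera scenario
    }
    return new AdvancedCameraUsecase();   // most other scenarios
}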

AdvancedCameraUsecase::Create performs a great deal of initialization work, which can be broken into the following stages:

  1. Obtaining the Usecase configuration information from the XML file
  2. Creating the Features
  3. Saving the data streams and rebuilding the Usecase configuration
  4. Calling the parent class CameraUsecaseBase's Initialize method to perform the usual initialization work

Let us analyze each of these stages in turn:

1. Obtaining the Usecase configuration information from the XML file

This part is implemented mainly by calling CameraUsecaseBase::GetXMLUsecaseByName.

The main job of this method is to find, in the PerNumTargetUsecases array, the Usecase matching the given usecaseName and return it to the caller. Taking "UsecaseZSL" as the example here, PerNumTargetUsecases is defined in g_pipeline.h, which is generated at build time by the usecaseconverter.pl script from the contents of common_usecase.xml in each platform's directory. A sketch of the lookup follows.
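A hedged sketch of that lookup, with a simplified table-entry type (the real ChiUsecase struct generated into g_pipeline.h also carries pipeline and target descriptors):

#include <cstring>

// Simplified stand-in for the generated usecase descriptor.
struct ChiUsecaseSketch
{
    const char* pUsecaseName;  // e.g. "UsecaseZSL"
    /* pipeline and target descriptors omitted */
};

// Linear search by name over the generated table, as described above.
static const ChiUsecaseSketch* FindUsecaseByName(const ChiUsecaseSketch* pTable,
                                                 unsigned int            tableSize,
                                                 const char*             pName)
{
    for (unsigned int i = 0; i < tableSize; i++)
    {
        if (0 == strcmp(pTable[i].pUsecaseName, pName))
        {
            return &pTable[i];
        }
    }
    return nullptr;  // no matching usecase found
}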

2. Creating the Features

If a Feature is selected for the current scenario, FeatureSetup is called to complete the creation work.

This method decides which Features to select based on information such as operation_mode, the number of cameras, and the UsecaseId; the logic is fairly straightforward. Once a Feature is chosen, its Create() method is called to perform initialization, as sketched below.
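A toy sketch of that decision flow, under assumed names; the real FeatureSetup() in chxadvancedcamerausecase.cpp works with CHI feature objects rather than an enum:

// Hypothetical feature picker; FeatureType and the inputs are stand-ins.
enum class FeatureType { None, ZSL, HDR };

FeatureType PickFeatureSketch(unsigned int operationMode,
                              unsigned int numCameras,
                              bool         zslRequested)
{
    if (numCameras > 1)
    {
        return FeatureType::None;  // multi-camera is handled by its own usecase
    }
    if (zslRequested)
    {
        return FeatureType::ZSL;   // the matching Feature's Create() then runs
    }
    (void)operationMode;           // the real code also branches on this
    return FeatureType::None;
}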

3. Saving the data streams and rebuilding the Usecase configuration

The data streams passed in from Camera Service need to be stored for later use. At the same time, Qualcomm added an Override mechanism for the Usecase so that it can be selectively extended as needed. Both of these steps are implemented through the SelectUsecaseConfig method,

which in turn mainly calls the following two methods:

  • ConfigureStream: stores the stream pointers configured by the upper layer into AdvancedCameraUsecase, including m_pPreviewStream used for preview and m_pSnapshotStream used for snapshots.
  • BuildUsecase: adds the pipelines required by the Features onto the original Usecase, creates a new Usecase, and stores it in the m_pChiUsecase member of AdvancedCameraUsecase; it then associates pipelines with Sessions via SetPipelineToSessionMapping (see the sketch after this list).
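Here is a toy illustration of that pipeline-to-session grouping; the entry layout and names are assumptions, not the real SetPipelineToSessionMapping() data:

#include <vector>

// Pipelines sharing a sessionId end up in the same Session.
struct PipelineEntrySketch
{
    const char*  pName;
    unsigned int sessionId;
};

std::vector<PipelineEntrySketch> BuildMappingSketch()
{
    return {
        { "RealtimePreview", 0 },  // realtime pipeline -> session 0
        { "ZSLSnapshotJpeg", 1 },  // offline snapshot pipeline -> session 1
    };
}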

4. Calling the parent class CameraUsecaseBase's Initialize method to perform the usual initialization work

This method performs three main operations:

  • Setting the Session callbacks
  • Creating the Pipelines
  • Creating the Sessions

Setting the Session callbacks

This method takes two parameters, the second of which has a default value. The first is ChiCallBacks, which serves as the callback for each created Session: once all pipelines in a Session have finished running, this callback is invoked to deliver the data back to CHI. A sketch of this wiring follows.
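A minimal sketch of that wiring, assuming a simplified callback table; the real ChiCallBacks struct in the CHI headers has more entries and different signatures:

// Simplified stand-in for the per-session callback table.
struct CallbacksSketch
{
    void (*ProcessCaptureResult)(void* pResult, void* pPrivateData);
    void (*NotifyMessage)(void* pMessage, void* pPrivateData);
};

// Static trampolines that forward results back into the usecase object.
static void ResultTrampoline(void* pResult, void* pPriv)   { /* forward to usecase */ }
static void MessageTrampoline(void* pMessage, void* pPriv) { /* forward to usecase */ }

CallbacksSketch MakeSessionCallbacksSketch()
{
    return { ResultTrampoline, MessageTrampoline };
}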

Creating the Pipelines

Each pipeline is created from the pipeline information obtained earlier, by calling the CreatePipeline() method.

Creating the Sessions

The Sessions are created via the CreateSession() method. At this point the callbacks on the AdvancedCameraUsecase side are registered with each Session, so that once the Session finishes processing data, the callbacks are invoked to hand the data back to AdvancedCameraUsecase.

In summary, the whole configure_streams process can be boiled down to the following points:

  1. Select the appropriate UsecaseId based on operation_mode, the number of cameras, and the stream configuration.
  2. Based on the selected UsecaseId, use the UsecaseFactory simple-factory class to create the AdvancedCameraUsecase object that manages all the resources for the scenario.
  3. The AdvancedCameraUsecase object is created via its Create() method, which reads the Usecase configuration defined in common_usecase.xml, then creates the required Features along with the pipelines they need, and merges those pipelines into the rebuilt Usecase through the Override mechanism.
  4. Finally, CameraUsecaseBase's Initialize method creates each pipeline and Session in turn, and registers AdvancedCameraUsecase's member methods with the Sessions so that each Session can return data to the Usecase.

4.4 Handling Capture Requests

When the user opens the camera application to start preview, or taps the shutter once, a capture request is triggered. The request is first sent down to Camera Service through the Camera API v2 interface via CameraCaptureSession's capture or setRepeatingRequest method, and is then dispatched inside Camera Service to the Camera3Device::RequestThread thread. From that thread, the request is finally delivered to the Provider through the HIDL interface ICameraDeviceSession::processCaptureRequest_3_4. When the Provider receives the request, it calls process_capture_request in the camera3_device_t struct, which begins the HAL's handling of this Request. That handling is implemented by CamX-CHI, so let us now look at how CamX-CHI implements this method. A minimal sketch of the dispatch step follows.
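Before diving into the real entry point below, this is a minimal sketch of the dispatch the Provider performs, using simplified stand-ins for the camera3 types (the real definitions live in the camera3 HAL headers):

struct camera3_device_sketch;  // forward declaration

struct camera3_capture_request_sketch
{
    unsigned int frame_number;
    /* settings, input/output buffers omitted */
};

// Function table the HAL fills in; conceptually mirrors camera3_device_ops_t.
struct camera3_device_ops_sketch
{
    int (*process_capture_request)(camera3_device_sketch*          pDevice,
                                   camera3_capture_request_sketch* pRequest);
};

struct camera3_device_sketch
{
    camera3_device_ops_sketch* ops;
};

// Provider side: hand the request to the HAL implementation (CamX here).
int DispatchRequestSketch(camera3_device_sketch*          pDevice,
                          camera3_capture_request_sketch* pRequest)
{
    return pDevice->ops->process_capture_request(pDevice, pRequest);
}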

4.4.1 process_capture_request

4.4.1.1 camxhal3entry->process_capture_request

[->vendor\qcom\proprietary\camx\src\core\hal\camxhal3entry.cpp]

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// process_capture_request
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
int process_capture_request(
const struct camera3_device* pCamera3DeviceAPI,
camera3_capture_request_t* pCaptureRequestAPI)
{
JumpTableHAL3* pHAL3 = static_cast<JumpTableHAL3*>(g_dispatchHAL3.GetJumpTable());

CAMX_ASSERT(pHAL3);
CAMX_ASSERT(pHAL3->process_capture_request);

return pHAL3->process_capture_request(pCamera3DeviceAPI, pCaptureRequestAPI);
}
4.4.1.2 camxhal3->process_capture_request

[->vendor\qcom\proprietary\camx\src\core\hal\camxhal3.cpp]

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// process_capture_request
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
static int process_capture_request(
const struct camera3_device* pCamera3DeviceAPI,
camera3_capture_request_t* pCaptureRequestAPI)
{
UINT64 frameworkFrameNum = 0;

if (NULL != pCaptureRequestAPI)
{
frameworkFrameNum = pCaptureRequestAPI->frame_number;
}
const StaticSettings* pSettings = HwEnvironment::GetInstance()->GetStaticSettings();
HALDevice* pHALDevice = GetHALDevice(pCamera3DeviceAPI);
CAMX_LOG_ERROR(CamxLogGroupHAL," enable Live Tuning %d", pSettings->enableLiveTuning);
INT32 cameraID = pHALDevice->GetCameraId();
if ((pSettings->enableLiveTuning==1) && (frameworkFrameNum % 5 ==0))
{
HwEnvironment::GetInstance()->GetSensorInfo(cameraID);
}

CAMX_ENTRYEXIT_SCOPE_ID(CamxLogGroupHAL, SCOPEEventHAL3ProcessCaptureRequest, frameworkFrameNum);

CAMX_TRACE_ASYNC_BEGIN_F(CamxLogGroupHAL, frameworkFrameNum, "HAL3: RequestTrace");

CamxResult result = CamxResultSuccess;

CAMX_ASSERT(NULL != pCamera3DeviceAPI);
CAMX_ASSERT(NULL != pCamera3DeviceAPI->priv);
CAMX_ASSERT(NULL != pCaptureRequestAPI);
CAMX_ASSERT(pCaptureRequestAPI->num_output_buffers > 0);
CAMX_ASSERT(NULL != pCaptureRequestAPI->output_buffers);

if ((NULL != pCamera3DeviceAPI) &&
(NULL != pCamera3DeviceAPI->priv) &&
(NULL != pCaptureRequestAPI) &&
(pCaptureRequestAPI->num_output_buffers > 0) &&
(NULL != pCaptureRequestAPI->output_buffers))
{
/// @todo (CAMX-337): Go deeper into camera3_capture_request_t struct for validation

HALDevice* pHALDevice = GetHALDevice(pCamera3DeviceAPI);
Camera3CaptureRequest* pRequest = reinterpret_cast<Camera3CaptureRequest*>(pCaptureRequestAPI);
camera3_capture_request_t& rCaptureRequest = *pCaptureRequestAPI;
BINARY_LOG(LogEvent::HAL3_ProcessCaptureRequest, rCaptureRequest);

CAMX_LOG_CONFIG(CamxLogGroupHAL, "frame_number %d, settings %p, num_output_buffers %d",
pCaptureRequestAPI->frame_number,
pCaptureRequestAPI->settings,
pCaptureRequestAPI->num_output_buffers);

uint32_t frame_number = rCaptureRequest.frame_number;
BOOL isCaptureBuffer = FALSE;
BOOL isReprocessBuffer = FALSE;
if (NULL != pCaptureRequestAPI->output_buffers)
{
for (UINT i = 0; i < pCaptureRequestAPI->num_output_buffers; i++)
{
const camera3_stream_buffer_t& rBuffer = pCaptureRequestAPI->output_buffers[i];
isCaptureBuffer = TRUE;
isReprocessBuffer = FALSE;
BINARY_LOG(LogEvent::HAL3_BufferInfo, frame_number, rBuffer, isCaptureBuffer, isReprocessBuffer);
CAMX_LOG_CONFIG(CamxLogGroupHAL, " output_buffers[%d] : %p, buffer: %p, status: %08x, stream: %p",
i,
&pCaptureRequestAPI->output_buffers[i],
pCaptureRequestAPI->output_buffers[i].buffer,
pCaptureRequestAPI->output_buffers[i].status,
pCaptureRequestAPI->output_buffers[i].stream);
if (HAL_PIXEL_FORMAT_BLOB == pCaptureRequestAPI->output_buffers[i].stream->format)
{
CAMX_TRACE_ASYNC_BEGIN_F(CamxLogGroupHAL, pCaptureRequestAPI->frame_number, "SNAPSHOT frameID: %d",
pCaptureRequestAPI->frame_number);
CAMX_TRACE_ASYNC_BEGIN_F(CamxLogGroupHAL, pCaptureRequestAPI->frame_number, "SHUTTERLAG frameID: %d",
pCaptureRequestAPI->frame_number);
}
}
}
if (NULL != pCaptureRequestAPI->input_buffer)
{

const camera3_stream_buffer_t& rBuffer = *pCaptureRequestAPI->input_buffer;
isCaptureBuffer = FALSE;
isReprocessBuffer = TRUE;
BINARY_LOG(LogEvent::HAL3_BufferInfo, frame_number, rBuffer, isCaptureBuffer, isReprocessBuffer);
CAMX_LOG_CONFIG(CamxLogGroupHAL, " input_buffer %p, buffer: %p, status: %08x, stream: %p",
pCaptureRequestAPI->input_buffer,
pCaptureRequestAPI->input_buffer->buffer,
pCaptureRequestAPI->input_buffer->status,
pCaptureRequestAPI->input_buffer->stream);
}


if (CAMX_IS_TRACE_ENABLED(CamxLogGroupCore))
{
pHALDevice->TraceZoom(pCaptureRequestAPI);
}

result = pHALDevice->ProcessCaptureRequest(pRequest);

if ((CamxResultSuccess != result) && (CamxResultEInvalidArg != result))
{
// HAL interface requires -ENODEV (EFailed) if a fatal error occurs
result = CamxResultEFailed;
}
}
else
{
CAMX_LOG_ERROR(CamxLogGroupHAL, "Invalid argument(s) for process_capture_request()");
// HAL interface requires -EINVAL (EInvalidArg) for invalid arguments
result = CamxResultEInvalidArg;
}

return Utils::CamxResultToErrno(result);
}
4.4.1.3 HALDevice::ProcessCaptureRequest

[->vendor\qcom\proprietary\camx\src\core\hal\camxhaldevice.cpp]

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// HALDevice::ProcessCaptureRequest
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
CamxResult HALDevice::ProcessCaptureRequest(
Camera3CaptureRequest* pRequest)
{
CamxResult result = CamxResultEFailed;

if (TRUE == IsCHIModuleInitialized())
{
// Keep track of information related to request for error conditions
PopulateFrameworkRequestBuffer(pRequest);

CAMX_LOG_INFO(CamxLogGroupHAL,
"CHIModule: Original framework framenumber %d contains %d output buffers",
pRequest->frameworkFrameNum,
pRequest->numOutputBuffers);

result = GetCHIAppCallbacks()->chi_override_process_request(reinterpret_cast<const camera3_device*>(&m_camera3Device),
reinterpret_cast<camera3_capture_request_t*>(pRequest),
NULL);
if (CamxResultSuccess != result)
{
// Remove the request from the framework data list if the request fails
RemoveFrameworkRequestBuffer(pRequest);
}
}
else
{
CAMX_LOG_ERROR(CamxLogGroupHAL, "CHIModule disabled, rejecting HAL request");
}

CAMX_ASSERT(CamxResultSuccess == result);

return result;
}
4.4.1.4 chi_override_process_request

[->vendor\qcom\proprietary\chi-cdk\core\chiframework\chxextensioninterface.cpp]

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/// @brief Process request call
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
static CDKResult chi_override_process_request(
const camera3_device_t* camera3_device,
camera3_capture_request_t* capture_request,
void* priv)
{
ExtensionModule* pExtensionModule = ExtensionModule::GetInstance();

return pExtensionModule->OverrideProcessRequest(camera3_device, capture_request, priv);
}
4.4.1.5 OverrideProcessRequest

[->vendor\qcom\proprietary\chi-cdk\core\chiframework\chxextensionmodule.cpp]

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// ExtensionModule::OverrideProcessRequest
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
CDKResult ExtensionModule::OverrideProcessRequest(
const camera3_device_t* camera3_device,
camera3_capture_request_t* pCaptureRequest,
VOID* pPriv)
{
CDKResult result = CDKResultSuccess;

for (UINT32 i = 0; i < pCaptureRequest->num_output_buffers; i++)
{
if (NULL != pCaptureRequest->output_buffers)
{
ChxUtils::WaitOnAcquireFence(&pCaptureRequest->output_buffers[i]);

INT* pAcquireFence = (INT*)&pCaptureRequest->output_buffers[i].acquire_fence;

*pAcquireFence = -1;
}
}

UINT32 logicalCameraId = GetCameraIdfromDevice(camera3_device);
if (CDKInvalidId != logicalCameraId)
{
if (NULL != pCaptureRequest->settings)
{
FreeLastKnownRequestSetting(logicalCameraId);
m_pLastKnownRequestSettings[logicalCameraId] = allocate_copy_camera_metadata_checked(pCaptureRequest->settings,
get_camera_metadata_size(pCaptureRequest->settings));
}

// Set valid metadata after flush if settings aren't available
if ((TRUE == m_hasFlushOccurred[logicalCameraId]) &&
(NULL == pCaptureRequest->settings))
{
CHX_LOG_INFO("Setting Request to m_pLastKnownRequestSettings after flush for frame_number:%d",
pCaptureRequest->frame_number);
pCaptureRequest->settings = m_pLastKnownRequestSettings[logicalCameraId];
m_hasFlushOccurred[logicalCameraId] = FALSE;
}

if (TRUE == static_cast<BOOL>(ChxUtils::AtomicLoadU32(&m_aFlushInProgress[logicalCameraId])))
{
CHX_LOG_INFO("flush enabled, frame %d", pCaptureRequest->frame_number);
HandleProcessRequestErrorAllPCRs(pCaptureRequest, logicalCameraId);
return CDKResultSuccess;
}

if (ChxUtils::AndroidMetadata::IsLongExposureCapture(const_cast<camera_metadata_t*>(pCaptureRequest->settings)))
{
ChxUtils::AtomicStoreU32(&m_aLongExposureInProgress[logicalCameraId], TRUE);
m_longExposureFrame[logicalCameraId] = pCaptureRequest->frame_number;
CHX_LOG_INFO("Long exposure enabled in frame %d", pCaptureRequest->frame_number);
}

m_pRecoveryLock[logicalCameraId]->Lock();
if (TRUE == m_RecoveryInProgress[logicalCameraId])
{
CHX_LOG_INFO("Wait for recovery to finish, before proceeding with new request for cameraId: %d", logicalCameraId);
m_pRecoveryCondition[logicalCameraId]->Wait(m_pRecoveryLock[logicalCameraId]->GetNativeHandle());
}
m_pRecoveryLock[logicalCameraId]->Unlock();

// Save the original metadata
const camera_metadata_t* pOriginalMetadata = pCaptureRequest->settings;
(VOID)pPriv;

m_pPCRLock[logicalCameraId]->Lock();
if (NULL != m_pSelectedUsecase[logicalCameraId])
{
m_originalFrameWorkNumber[logicalCameraId] = pCaptureRequest->frame_number;

// Recovery happened if framework didn't send any metadata, send valid metadata
if (m_firstFrameAfterRecovery[logicalCameraId] == pCaptureRequest->frame_number &&
NULL == pCaptureRequest->settings)
{
CHX_LOG_INFO("Setting Request for first frame after case =%d", m_firstFrameAfterRecovery[logicalCameraId]);
pCaptureRequest->settings = m_pLastKnownRequestSettings[logicalCameraId];
m_firstFrameAfterRecovery[logicalCameraId] = 0;
}

if (pCaptureRequest->output_buffers != NULL)
{
for (UINT i = 0; i < pCaptureRequest->num_output_buffers; i++)
{
if ((NULL != m_pPerfLockManager[logicalCameraId]) &&
(pCaptureRequest->output_buffers[i].stream->format == ChiStreamFormatBlob) &&
((pCaptureRequest->output_buffers[i].stream->data_space ==
static_cast<android_dataspace_t>(DataspaceV0JFIF)) ||
(pCaptureRequest->output_buffers[i].stream->data_space ==
static_cast<android_dataspace_t>(DataspaceJFIF))))
{
m_pPerfLockManager[logicalCameraId]->AcquirePerfLock(PERF_LOCK_SNAPSHOT_CAPTURE, 2000);
break;
}

if ((NULL != m_pPerfLockManager[logicalCameraId]) &&
TRUE == UsecaseSelector::IsHEIFStream(pCaptureRequest->output_buffers[i].stream))
{
m_pPerfLockManager[logicalCameraId]->AcquirePerfLock(PERF_LOCK_SNAPSHOT_CAPTURE, 2000);
break;
}
}
}

result = m_pSelectedUsecase[logicalCameraId]->ProcessCaptureRequest(pCaptureRequest);
}

if (pCaptureRequest->settings != NULL)
{
// Restore the original metadata pointer that came from the framework
pCaptureRequest->settings = pOriginalMetadata;
}

// Need to return success on PCR to allow FW to continue sending requests
if (result == CDKResultEBusy)
{
result = CDKResultSuccess;
}

if (result == CamxResultECancelledRequest)
{
// Ignore the Failure if flush or recovery returned CamcelRequest
CHX_LOG("Flush/Recovery is in progress %d and so ignore failure", pCaptureRequest->frame_number);
result = CDKResultSuccess;
}

m_pPCRLock[logicalCameraId]->Unlock();
}
else
{
CHX_LOG_ERROR("Invalid logical camera id device:%p!!", camera3_device);
}

return result;
}
4.4.1.6 Usecase::ProcessCaptureRequest

[->vendor\qcom\proprietary\chi-cdk\core\chiframework\chxusecase.cpp]

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/// Usecase::ProcessCaptureRequest
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
CDKResult Usecase::ProcessCaptureRequest(
camera3_capture_request_t* pRequest)
{
CDKResult result = CDKResultSuccess;
UINT32 chiOverrideFrameNum = GetChiOverrideFrameNum();
UINT32 resultFrameIndexChi = chiOverrideFrameNum % MaxOutstandingRequests;
UINT32 frameworkFrameNum = pRequest->frame_number;
camera3_capture_request* pPendingPCRSlot = &m_pendingPCRs[resultFrameIndexChi];
UINT32 cameraId = GetCameraId();

/// Chi override frame number is what the rest of the override module knows. The original application frame number is only
/// known to this class and no one else. Hence any result communication to application needs to go thru this class strictly

m_pMapLock->Lock();

if (FALSE == m_requestFlags[resultFrameIndexChi].isInErrorState)
{
// Old request not returned yet. Flush its result result first
if (0 != m_numberOfPendingOutputBuffers[resultFrameIndexChi])
{
CHX_LOG_ERROR("Chi Frame: %d hasn't returned a result and will be canceled in favor of Chi Frame: %d. Index: %d",
pPendingPCRSlot->frame_number,
pRequest->frame_number,
resultFrameIndexChi);
HandleProcessRequestError(pPendingPCRSlot);
}
else if (FALSE == m_requestFlags[resultFrameIndexChi].isOutputMetaDataSent)
{
CHX_LOG_ERROR("Pending metadata in PCR. ChiOverrideFrame: %d, Last Request Frame: %" PRIu64,
chiOverrideFrameNum - MaxOutstandingRequests,
m_lastAppRequestFrame);

// We reached max number of outstanding requests but metadata is not sent. Most probably errored out.
// Send Metadata error for this frame.

// release metadata also.
if (NULL != m_captureResult[resultFrameIndexChi].result)
{
m_pMetadataManager->ReleaseAndroidFrameworkOutputMetadata(
m_captureResult[resultFrameIndexChi].result);
}

HandleResultError(pPendingPCRSlot);
}
}

AssignChiOverrideFrameNum(pRequest->frame_number);
pRequest->frame_number = chiOverrideFrameNum;

CHX_LOG("Saving buffer for CHI Frame: %d, requestFrame: %d, NumBuff: %d resultFrameIndexChi: %d",
chiOverrideFrameNum,
frameworkFrameNum,
pRequest->num_output_buffers,
resultFrameIndexChi);

// Set pending output buffers after clearing the previous one
m_numAppPendingOutputBuffers[resultFrameIndexChi] = pRequest->num_output_buffers;
m_numberOfPendingOutputBuffers[resultFrameIndexChi] = pRequest->num_output_buffers;
m_numBufferErrorMessages[resultFrameIndexChi] = 0;

m_requestFlags[resultFrameIndexChi].value = 0; // Reset all flags
m_requestFlags[resultFrameIndexChi].isMessagePending = TRUE;
pPendingPCRSlot->frame_number = chiOverrideFrameNum;
pPendingPCRSlot->num_output_buffers = pRequest->num_output_buffers;
pPendingPCRSlot->input_buffer = pRequest->input_buffer;

if (InvalidFrameNumber == m_nextAppMessageFrame)
{
m_nextAppMessageFrame = chiOverrideFrameNum;
m_lastAppMessageFrameReceived = chiOverrideFrameNum;
}

if (InvalidFrameNumber == m_nextAppResultFrame)
{
CHX_SET_AND_LOG_UINT64(m_nextAppResultFrame, chiOverrideFrameNum);
m_nextAppMessageFrame = chiOverrideFrameNum;
m_lastAppRequestFrame = chiOverrideFrameNum;
m_lastResultMetadataFrameNum = m_nextAppMessageFrame - 1;
}

BOOL isSnapshotStream = FALSE;

for (UINT i = 0; i < pRequest->num_output_buffers; i++)
{
if (&pRequest->output_buffers[i] == NULL)
{
continue;
}
CHX_LOG("SAVING frame: %d resultFrameIndexChi: %d", frameworkFrameNum, resultFrameIndexChi);
ChxUtils::Memcpy(const_cast<camera3_stream_buffer_t*>(&pPendingPCRSlot->output_buffers[i]),
&pRequest->output_buffers[i],
sizeof(camera3_stream_buffer_t));
if ((UsecaseSelector::IsJPEGSnapshotStream(pRequest->output_buffers[i].stream)) ||
(UsecaseSelector::IsHEIFStream(pRequest->output_buffers[i].stream)))
{
isSnapshotStream = TRUE;
}
}

UINT32 isZSLMode = 0;
isZSLMode = ChxUtils::AndroidMetadata::GetZSLMode(const_cast<camera_metadata_t*>(pRequest->settings));

if ((TRUE == isSnapshotStream) && (1 == isZSLMode))
{
CHX_LOG_INFO("Frame: %u(idx: %u) is a Snapshot, Setting isSnapshotStream TRUE",
chiOverrideFrameNum, resultFrameIndexChi);
m_requestFlags[resultFrameIndexChi].isZSLMessageAvailable = TRUE;
}

m_pMapLock->Unlock();
ResetMetadataStatus(pRequest);
ChxUtils::AtomicStore64(&m_lastAppRequestFrame, chiOverrideFrameNum);

camera3_capture_result_t* pUsecaseResult = GetCaptureResult(resultFrameIndexChi);
pUsecaseResult->result = NULL;
pUsecaseResult->frame_number = pRequest->frame_number;
pUsecaseResult->num_output_buffers = 0;

if (FlushStatus::NotFlushing != GetFlushStatus())
{
CHX_LOG_INFO("Usecase is flushing. No requests will be generated for Framework Frame: %d", frameworkFrameNum);
HandleProcessRequestError(pPendingPCRSlot);
}
else
{
if (NULL != pRequest->settings)
{
// Replace the metadata by appending vendor tag for cropRegions
result = ReplaceRequestMetadata(pRequest->settings, cameraId);
if (CDKResultSuccess == result)
{
pRequest->settings = m_pReplacedMetadata;
}

// The translation must be done in base class as it is also required for default usecase,
// which is used for most CTS/ITS cases.
if ((NULL != m_pLogicalCameraInfo) &&
(TRUE == UsecaseSelector::IsQuadCFASensor(m_pLogicalCameraInfo, NULL)) &&
(FALSE == ExtensionModule::GetInstance()->ExposeFullsizeForQuadCFA()))
{
// map ROIs (aec/af/crop region) from binning active array size based to full active arrsy size based
result = OverrideInputMetaForQCFA(const_cast<camera_metadata_t*>(pRequest->settings));
}

if (CamxResultSuccess != result)
{
CHX_LOG_ERROR("OverrideInputMetaForQCFA Errored Out! Usecase:%d cameraId:%d in state: %s",
GetUsecaseId(), GetCameraId(), CamxResultStrings[result]);
}
}

result = ExecuteCaptureRequest(pRequest);

if (CDKResultSuccess != result)
{
CHX_LOG_ERROR("ECR Errored Out! Usecase:%d cameraId:%d in state: %s",
GetUsecaseId(), GetCameraId(), CamxResultStrings[result]);

if (CDKResultETimeout == result)
{
CHX_LOG_ERROR("Usecase:%d cameraId:%d timed out - returning success to trigger recovery",
GetUsecaseId(), GetCameraId());
result = CDKResultSuccess;
}
}

// Restore the original metadata
RestoreRequestMetadata(pRequest, cameraId);
}

// reset the frame number
pRequest->frame_number = frameworkFrameNum;
return result;
}

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/// Usecase::ReplaceRequestMetadata
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
CDKResult Usecase::ReplaceRequestMetadata(
const VOID* pMetadata,
UINT32 cameraId)
{
CDKResult result = CDKResultSuccess;

// Save the the original metadata
ExtensionModule::GetInstance()->SetOriginalMetadata(pMetadata, cameraId);

m_pReplacedMetadata = place_camera_metadata(m_pReplacedMetadata,
m_replacedMetadataSize,
ReplacedMetadataEntryCapacity,
ReplacedMetadataDataCapacity);

// Add the existing metadata first before appending the new tags
result = append_camera_metadata(m_pReplacedMetadata, static_cast<const camera_metadata_t*>(pMetadata));

if (CDKResultSuccess == result)
{
// Read the android crop region
camera_metadata_entry_t entry = { 0 };
if (0 == find_camera_metadata_entry(m_pReplacedMetadata, ANDROID_SCALER_CROP_REGION, &entry))
{
CaptureRequestCropRegions cropRegions;
cropRegions.userCropRegion.left = entry.data.i32[0];
cropRegions.userCropRegion.top = entry.data.i32[1];
cropRegions.userCropRegion.width = entry.data.i32[2];
cropRegions.userCropRegion.height = entry.data.i32[3];

CHIRECT* pUserZoom = reinterpret_cast<CHIRECT*>(&cropRegions.userCropRegion);
cropRegions.pipelineCropRegion = *pUserZoom;
cropRegions.ifeLimitCropRegion = *pUserZoom;

// Set the cropRegions vendor tag data
ChxUtils::AndroidMetadata::SetVendorTagValue(m_pReplacedMetadata, VendorTag::CropRegions,
sizeof(CaptureRequestCropRegions), &cropRegions);
}
}
return result;
}
4.4.1.7 ExecuteCaptureRequest

[->vendor\qcom\proprietary\chi-cdk\core\chiusecase\chxadvancedcamerausecase.cpp]

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/// AdvancedCameraUsecase::ExecuteCaptureRequest
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
CDKResult AdvancedCameraUsecase::ExecuteCaptureRequest(
camera3_capture_request_t* pRequest)
{
CDKResult result = CDKResultSuccess;
UINT frameIndex = pRequest->frame_number % MaxOutstandingRequests;
Feature* pFeature = m_pActiveFeature;

CHX_LOG("AdvancedCameraUsecase::ExecuteCaptureRequest %u %u", pRequest->frame_number, frameIndex);

// swapping JPEG thumbnail size params
const ExtensionModule* pExtModule = ExtensionModule::GetInstance();
BOOL isGpuOverrideSetting = pExtModule->UseGPUDownscaleUsecase() || pExtModule->UseGPURotationUsecase();
if( (TRUE == m_GpuNodePresence) && (TRUE == isGpuOverrideSetting))
{
if (NULL != pRequest->settings)
{
// ANDROID_JPEG_ORIENTATION
INT32 JpegOrientation = 0;
CDKResult result = m_vendorTagOps.pGetMetaData(
const_cast<VOID*>
(reinterpret_cast<const VOID*>(pRequest->settings)),
ANDROID_JPEG_ORIENTATION,
&JpegOrientation,
sizeof(INT32));

if (CDKResultSuccess == result)
{
INT32* pIntentJpegSize = NULL;

if (JpegOrientation % 180)
{
JPEGThumbnailSize thumbnailSizeGet, thumbnailSizeSet;
CDKResult result = m_vendorTagOps.pGetMetaData(
const_cast<VOID*>
(reinterpret_cast<const VOID*>(pRequest->settings)),
ANDROID_JPEG_THUMBNAIL_SIZE,
&thumbnailSizeGet,
sizeof(JPEGThumbnailSize));

if (CDKResultSuccess == result)
{
thumbnailSizeSet.JpegThumbnailSize_0 = thumbnailSizeGet.JpegThumbnailSize_1;
thumbnailSizeSet.JpegThumbnailSize_1 = thumbnailSizeGet.JpegThumbnailSize_0;

CDKResult result = m_vendorTagOps.pSetMetaData(
const_cast<VOID*>
(reinterpret_cast<const VOID*>(pRequest->settings)),
ANDROID_JPEG_THUMBNAIL_SIZE,
&thumbnailSizeSet,
sizeof(JPEGThumbnailSize));

if (CDKResultSuccess == result)
{
CDKResult result = m_vendorTagOps.pGetMetaData(
const_cast<VOID*>
(reinterpret_cast<const VOID*>(pRequest->settings)),
ANDROID_JPEG_THUMBNAIL_SIZE,
&thumbnailSizeGet,
sizeof(JPEGThumbnailSize));
}
}
}
}
}
}
// exchange JPEG thumbnail

m_shutterTimestamp[frameIndex] = 0;

result = UpdateFeatureModeIndex(const_cast<camera_metadata_t*>(pRequest->settings));
if (TRUE ==
ChxUtils::AndroidMetadata::IsVendorTagPresent(reinterpret_cast<const VOID*>(pRequest->settings),
VendorTag::VideoHDR10Mode))
{
VOID* pData = NULL;
StreamHDRMode HDRMode = StreamHDRMode::HDRModeNone;
ChxUtils::AndroidMetadata::GetVendorTagValue(reinterpret_cast<const VOID*>(pRequest->settings),
VendorTag::VideoHDR10Mode,
reinterpret_cast<VOID**>(&pData));
if (NULL != pData)
{
HDRMode = *(static_cast<StreamHDRMode*>(pData));
if (StreamHDRMode::HDRModeHDR10 == HDRMode)
{
m_tuningFeature2Value = static_cast<UINT32>(ChiModeFeature2SubModeType::HDR10);
}
else if (StreamHDRMode::HDRModeHLG == HDRMode)
{
m_tuningFeature2Value = static_cast<UINT32>(ChiModeFeature2SubModeType::HLG);
}
else
{
m_tuningFeature2Value = 0;
}
}
}

if (StreamConfigModeFastShutter == ExtensionModule::GetInstance()->GetOpMode(m_cameraId) && NULL != pRequest->settings)
{
CHX_LOG("SetMetaData: StreamConfigModeFastShutter ");

UINT8 isFSModeVendorTag = 1;
UINT32 FSModeTagId = ExtensionModule::GetInstance()->GetVendorTagId(VendorTag::FastShutterMode);
result = m_vendorTagOps.pSetMetaData(
const_cast<VOID*>
(reinterpret_cast<const VOID*>(pRequest->settings)),
FSModeTagId,
&isFSModeVendorTag,
sizeof(isFSModeVendorTag));

if (CDKResultSuccess != result)
{
CHX_LOG_ERROR("pSetMetaData failed result %d", result);
}
}

if (TRUE == hasSnapshotStreamRequest(pRequest))
{
WaitForDeferThread();
}

ChxUtils::Memset(&m_snapshotFeatures[frameIndex], 0, sizeof(SnapshotFeatureList));

if (TRUE == AdvancedFeatureEnabled())
{
for (UINT32 i = 0; i < pRequest->num_output_buffers; i++)
{
if (m_pSnapshotStream == reinterpret_cast<CHISTREAM*>(pRequest->output_buffers[i].stream))
{
pFeature = SelectFeatureToExecuteCaptureRequest(pRequest, 0);
}
}

if (NULL != pFeature)
{
m_shutterTimestamp[frameIndex] = 0;
result = pFeature->ExecuteProcessRequest(pRequest);
}
}
else
{
CHX_LOG_INFO("CameraUsecaseBase::ExecuteCaptureRequest()");
result = CameraUsecaseBase::ExecuteCaptureRequest(pRequest);
}

return result;
}
4.4.1.8 CameraUsecaseBase::ExecuteCaptureRequest
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/// CameraUsecaseBase::ExecuteCaptureRequest
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
CDKResult CameraUsecaseBase::ExecuteCaptureRequest(
camera3_capture_request_t* pRequest)
{
// Base implementation finds the buffers that go to each output and invokes SubmitRequest for each pipeline with outputs
// If the advanced class wishes to use this function but not invoke all the pipelines, the output produced by the desired
// inactive pipeline should be removed from pRequest->output_buffers
CDKResult result = CDKResultSuccess;

CHX_LOG("CameraUsecaseBase::ExecuteCaptureRequest for frame %d with %d output buffers",
pRequest->frame_number, pRequest->num_output_buffers);

static const UINT32 NumOutputBuffers = 5;

UINT frameIndex = pRequest->frame_number % MaxOutstandingRequests;

if (InvalidId != m_rtSessionIndex)
{
UINT32 rtPipeline = m_sessions[m_rtSessionIndex].rtPipelineIndex;

if (InvalidId != rtPipeline)
{
m_selectedSensorModeIndex =
m_sessions[m_rtSessionIndex].pipelines[rtPipeline].pPipeline->GetSensorModeInfo()->modeIndex;
result = UpdateSensorModeIndex(const_cast<camera_metadata_t*>(pRequest->settings));
}
}

for (UINT session = 0; session < MaxSessions; session++)
{
BOOL bIsOffline = FALSE;

for (UINT pipeline = 0; pipeline < m_sessions[session].numPipelines; pipeline++)
{
if (NULL != pRequest->input_buffer)
{
bIsOffline = TRUE;

result = WaitForDeferThread();

if (CDKResultSuccess != result)
{
CHX_LOG_ERROR("Defer thread failure");
break;
}

// Skip submitting to realtime pipelines when an input buffer is provided
if (TRUE == m_sessions[session].pipelines[pipeline].pPipeline->IsRealTime())
{
continue;
}
}
else
{
// Skip submitting to offline pipelines when an input buffer is not provided
if (FALSE == m_sessions[session].pipelines[pipeline].pPipeline->IsRealTime())
{
continue;
}
}

CHISTREAMBUFFER outputBuffers[NumOutputBuffers] = { { 0 } };
UINT32 outputCount = 0;
PipelineData* pPipelineData = &m_sessions[session].pipelines[pipeline];

for (UINT32 buffer = 0; buffer < pRequest->num_output_buffers; buffer++)
{
for (UINT stream = 0; stream < pPipelineData->numStreams; stream++)
{
if ( (TRUE == bIsOffline) &&
(FALSE == m_sessions[session].pipelines[pipeline].pPipeline->IsRealTime()) &&
(TRUE == m_bCloningNeeded) )
{
UINT index = 0;
if (TRUE == IsThisClonedStream(m_pClonedStream, pPipelineData->pStreams[stream], &index))
{
if ((reinterpret_cast<CHISTREAM*>(pRequest->output_buffers[buffer].stream) ==
m_pFrameworkOutStreams[index]))
{
ChxUtils::PopulateHALToChiStreamBuffer(&pRequest->output_buffers[buffer],
&outputBuffers[outputCount]);
outputBuffers[outputCount].pStream = pPipelineData->pStreams[stream];
outputCount++;
}
}
}
else
{
if (reinterpret_cast<CHISTREAM*>(pRequest->output_buffers[buffer].stream) ==
pPipelineData->pStreams[stream])
{
ChxUtils::PopulateHALToChiStreamBuffer(&pRequest->output_buffers[buffer],
&outputBuffers[outputCount]);
outputCount++;
}
}
}
}

if (0 < outputCount)
{
CHICAPTUREREQUEST request = { 0 };
CHISTREAMBUFFER inputBuffer = { 0 };
UINT32 sensorModeIndex;

if (NULL != pRequest->input_buffer)
{
request.numInputs = 1;
ChxUtils::PopulateHALToChiStreamBuffer(pRequest->input_buffer, &inputBuffer);
request.pInputBuffers = &inputBuffer;
}

request.frameNumber = pRequest->frame_number;
request.hPipelineHandle = reinterpret_cast<CHIPIPELINEHANDLE>(
m_sessions[session].pSession->GetPipelineHandle());
request.numOutputs = outputCount;
request.pOutputBuffers = outputBuffers;
request.pPrivData = &m_privData[frameIndex];

UpdateMetadataBuffers(pRequest, pPipelineData->id, &request, session, pipeline, !bIsOffline);

CHIPIPELINEREQUEST submitRequest = { 0 };
submitRequest.pSessionHandle = reinterpret_cast<CHIHANDLE>(
m_sessions[session].pSession->GetSessionHandle());
submitRequest.numRequests = 1;
submitRequest.pCaptureRequests = &request;

m_numPCRsBeforeStreamOn = ExtensionModule::GetInstance()->GetNumPCRsBeforeStreamOn(m_cameraId);

if (1 > m_numPCRsBeforeStreamOn)
{
// Activate pipeline before submitting request when EarlyPCR disabled
result = CheckAndActivatePipeline(m_sessions[session].pSession);
}

if (CDKResultSuccess != result)
{
CHX_LOG_ERROR("Activate Pipeline failure for session:%d pipeline %d", session, pipeline);
break;
}

CHX_LOG("Submitting request to Session %d Pipeline %d outputCount=%d", session, pipeline, outputCount);

CHX_LOG_REQMAP("frame: %u <==> (chiFrameNum) chiOverrideFrameNum: %" PRIu64,
GetAppFrameNum(request.frameNumber),
request.frameNumber);

result = SubmitRequest(&submitRequest);

if (CDKResultSuccess != result)
{
CHX_LOG_ERROR("Submit request failure for session:%d", session);
break;
}

if (0 < m_numPCRsBeforeStreamOn)
{
// Activate pipeline after submitting request when EarlyPCR enabled
result = CheckAndActivatePipeline(m_sessions[session].pSession);
}

if (CDKResultSuccess != result)
{
CHX_LOG_ERROR("Activate Pipeline failure for session:%d pipeline %d", session, pipeline);
break;
}

}
}

if (CDKResultSuccess != result)
{
CHX_LOG_ERROR("Defer thread or submit request failure for session:%d", session);
break;
}
}

return result;
}
4.4.1.9 ChiContext::ActivatePipeline

[->vendor\qcom\proprietary\camx\src\core\chi\camxchicontext.cpp]

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// ChiContext::ActivatePipeline
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
CamxResult ChiContext::ActivatePipeline(
CHISession* pChiSession,
CHIPIPELINEHANDLE hPipelineDescriptor)
{
CAMX_ASSERT(NULL != pChiSession);
CAMX_ASSERT(NULL != hPipelineDescriptor);

CamxResult result = CamxResultSuccess;

if (TRUE == pChiSession->UsingResourceManager(0))
{
ResourceID resourceId = static_cast<ResourceID>(ResourceType::RealtimePipeline);
GetResourceManager()->CheckAndAcquireResource(resourceId, static_cast<VOID*>(hPipelineDescriptor), 0);
}

result = pChiSession->StreamOn(hPipelineDescriptor);

return result;
}
4.4.1.10 Summary

CamX first forwards the request to HALDevice, and the HALDevice object then calls the CHI callback m_ChiAppCallbacks.chi_override_process_request (defined in chxextensioninterface.cpp) that was obtained during initialization, sending the request over to the CHI side.

In chi_override_process_request, the ExtensionModule object is fetched and the request is handed to it. That object holds the Usecase created earlier, and after several layers of calls, AdvancedCameraUsecase::ExecuteCaptureRequest is eventually invoked to handle this Request. The flow is as follows:

AdvancedCameraUsecase::ExecuteCaptureRequest handles the request along two main branches (see the sketch after this list):

  • If no Feature currently needs to run, the default path is taken: as the flow chart above shows, CameraUsecaseBase::ExecuteCaptureRequest is called. It first takes the request and repackages it into a CHICAPTUREREQUEST, then calls CheckAndActivatePipeline to wake the pipeline, an operation that ultimately reaches Session::StreamOn. With the pipeline awake, it continues by sending the wrapped Request down into CamX, finally arriving at Session::ProcessCaptureRequest, at which point the Request starts circulating inside the Session.
  • If the current scene requires some Feature, the request is handed directly to that Feature through its ExecuteProcessRequest method; this path still ends up calling Session::StreamOn and Session::ProcessCaptureRequest to wake the pipeline and deliver the request to the Session.
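The hand-off just described can be condensed into a short sketch. This is illustrative only, not the verbatim source: chi_override_process_request and ExecuteCaptureRequest are real names in this flow, while SubmitRequestToUsecase is a hypothetical stand-in for the layered dispatch inside ExtensionModule.

```cpp
#include <cstddef>

struct camera3_capture_request_t;   // HAL3 request type, simplified to a forward declaration

// Stand-in for the Usecase hierarchy; AdvancedCameraUsecase overrides this.
struct Usecase
{
    virtual int ExecuteCaptureRequest(camera3_capture_request_t* pRequest) = 0;
    virtual ~Usecase() = default;
};

struct ExtensionModule
{
    Usecase* m_pSelectedUsecase = nullptr;   // created earlier, at stream configuration time

    static ExtensionModule* GetInstance()
    {
        static ExtensionModule instance;
        return &instance;
    }

    // Hypothetical name for the layered dispatch that ends in
    // AdvancedCameraUsecase::ExecuteCaptureRequest.
    int SubmitRequestToUsecase(camera3_capture_request_t* pRequest)
    {
        return (nullptr != m_pSelectedUsecase)
                   ? m_pSelectedUsecase->ExecuteCaptureRequest(pRequest)
                   : -1;
    }
};

// CHI-side entry point invoked by CamX (via HALDevice) through the
// m_ChiAppCallbacks table populated at initialization.
static int chi_override_process_request(camera3_capture_request_t* pRequest)
{
    return ExtensionModule::GetInstance()->SubmitRequestToUsecase(pRequest);
}
```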

Either branch ultimately reaches two key methods, Session::StreamOn and Session::ProcessCaptureRequest; the next sections look at each in detail:

4.4.2 Session::StreamOn

[->vendor\qcom\proprietary\camx\src\core\camxsession.cpp]

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// Session::StreamOn
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
CamxResult Session::StreamOn(
CHIPIPELINEHANDLE hPipelineDescriptor)
{
UINT32 index = 0;
CamxResult result = CamxResultSuccess;

// The input pipelineIndex does not necessarily match the index recorded by the Session, so use the Descriptor to find it.
for (index = 0; index < m_numPipelines; index++)
{
if (hPipelineDescriptor == m_pipelineData[index].pPipelineDescriptor)
{
// found corresponding pipeline can use index to get to it
break;
}
}

CAMX_ASSERT(index < m_numPipelines);

Pipeline* pPipeline = m_pipelineData[index].pPipeline;

m_pStreamOnOffLock->Lock();

if ((NULL != pPipeline) && (PipelineStatus::STREAM_ON != pPipeline->GetPipelineStatus()))
{
PipelineStatus pipelineStatus = pPipeline->GetPipelineStatus();

if (PipelineStatus::FINALIZED > pipelineStatus)
{
result = FinalizeDeferPipeline(index);
pipelineStatus = pPipeline->GetPipelineStatus();
CAMX_LOG_INFO(CamxLogGroupCore, "FinalizeDeferPipeline result: %d pipelineStatus: %d",
result, pipelineStatus);
}

if (CamxResultSuccess != result)
{
CAMX_LOG_ERROR(CamxLogGroupCore, "FinalizeDeferPipeline() unsuccessful, Session StreamOn() is failed !!");
pPipeline->ReleaseResources();
}
else
{
if (PipelineStatus::FINALIZED <= pipelineStatus)
{
result = pPipeline->StreamOn();

if (CamxResultSuccess == result)
{
if (TRUE == pPipeline->IsRealTime())
{
m_numStreamedOnRealtimePipelines++;

CheckAndSyncLinks();
}
}
else
{
CAMX_LOG_ERROR(CamxLogGroupCore, "Pipeline %s failed to stream on.",
pPipeline->GetPipelineName());
}
}
}
}

m_pStreamOnOffLock->Unlock();
return result;
}

As the method name suggests, StreamOn is what starts the hardware's data output: concretely, it programs the Sensor registers so the sensor begins producing frames, and it announces the current Session state to every Node so that each can prepare internally for processing data. All subsequent Request flow is predicated on this method having run, which makes its importance plain. Its operation is shown in the figure below:

Session::StreamOn does two main things (a simplified sketch follows this list):

  • It calls FinalizeDeferPipeline(). If the current pipeline has not yet been initialized, this calls the pipeline's FinalizePipeline method, which in turn performs FinalizeInitialization, CreateBufferManagers, NotifyPipelineCreated, and PrepareNodeStreamOn on every Node belonging to the pipeline. FinalizeInitialization completes the Node's initialization; NotifyPipelineCreated informs the Node of the current Pipeline state, so the Node can react internally as needed; PrepareNodeStreamOn completes the pre-streaming configuration of the hardware blocks controlled by Nodes such as Sensor and IFE, including setting exposure parameters; CreateBufferManagers involves one of the most important buffer-management mechanisms in CamX-CHI, creating the Node's ImageBufferManager, the class that manages the allocation/circulation/release of buffers on the Node's output ports.
  • It calls Pipeline::StreamOn, which goes on to notify the CSL layer to start the data stream and calls every Node's OnNodeStreamOn method. That method calls ImageBufferManager::Activate(), which performs the actual allocation of the buffers that will hold image data, and afterwards calls the pOnStreamOn method of any user-defined CHI Node, where users can hook in custom behavior.
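The per-Node call sequence in the two bullets above can be condensed into the following minimal sketch. SketchNode and SketchPipeline are simplified stand-ins for the real CamX Node and Pipeline classes; the method bodies are deliberately left empty, since only the ordering matters here.

```cpp
#include <vector>

struct SketchNode
{
    void FinalizeInitialization() {}   // complete any deferred Node initialization
    void CreateBufferManagers()   {}   // build an ImageBufferManager for the output ports
    void NotifyPipelineCreated()  {}   // let the Node react to the new pipeline state
    void PrepareNodeStreamOn()    {}   // Sensor/IFE register setup, exposure parameters
    void OnNodeStreamOn()         {}   // ImageBufferManager::Activate() allocates the image
                                       // buffers, then the custom CHI node's pOnStreamOn runs
};

struct SketchPipeline
{
    std::vector<SketchNode> nodes;

    // Phase 1: what FinalizePipeline drives for a pipeline that is not yet finalized.
    void FinalizePipeline()
    {
        for (auto& node : nodes)
        {
            node.FinalizeInitialization();
            node.CreateBufferManagers();
            node.NotifyPipelineCreated();
            node.PrepareNodeStreamOn();
        }
    }

    // Phase 2: Pipeline::StreamOn - ask CSL to start the stream, then give every
    // Node its stream-on callback.
    void StreamOn()
    {
        NotifyCslStreamOn();
        for (auto& node : nodes)
        {
            node.OnNodeStreamOn();
        }
    }

private:
    void NotifyCslStreamOn() {}   // stands in for the CSL/kernel stream-on notification
};
```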

4.4.3 Session::ProcessCaptureRequest

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// Session::ProcessCaptureRequest
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
CamxResult Session::ProcessCaptureRequest(
const ChiPipelineRequest* pPipelineRequests)
{
CamxResult result = CamxResultSuccess;

UINT numRequests = pPipelineRequests->numRequests;
UINT32 pipelineIndexes[MaxPipelinesPerSession];

const StaticSettings* pStaticSettings = m_pChiContext->GetStaticSettings();

CAMX_ASSERT(NULL != pPipelineRequests);
CAMX_ASSERT(NULL != pStaticSettings);

// Prepare info for each request on each pipeline
for (UINT requestIndex = 0; requestIndex < numRequests; requestIndex++)
{
// The input pipelineIndex does not necessarily match the index recorded by the Session, so use GetPipelineIndex to get
// the corresponding pipeline index.
pipelineIndexes[requestIndex] = GetPipelineIndex(pPipelineRequests->pCaptureRequests[requestIndex].hPipelineHandle);
CAMX_LOG_VERBOSE(CamxLogGroupCore,
"Received(%d/%d) for framework framenumber %llu, num outputs %d on %s: PipelineStatus:%d",
requestIndex+1,
numRequests,
pPipelineRequests->pCaptureRequests[requestIndex].frameNumber,
pPipelineRequests->pCaptureRequests[requestIndex].numOutputs,
m_pipelineData[pipelineIndexes[requestIndex]].pPipeline->GetPipelineIdentifierString(),
m_pipelineData[pipelineIndexes[requestIndex]].pPipeline->GetPipelineStatus());
}

if (CamxResultSuccess != m_pFlushLock->TryLock())
{
// Prepare capture result with request error for the pipeline requests but don't
// dispatch it immediately as it would be dispatched at the end of flush call. This will
// ensure that all capture results of the cancelled requests are dispatched first before
// returning the control to the caller.
// Decrementing live pending and sending processing done notification to allow flush to
// process this inflight request's result after handling all enqueued requests
m_pLivePendingRequestsLock->Lock();
PrepareChiRequestErrorForInflightRequests(pPipelineRequests);
m_pLivePendingRequestsLock->Unlock();

NotifyProcessingDone();

m_pFlushDoneLock->Lock();
while (TRUE == GetFlushStatus())
{
// Block the thread and send result after flush is done
result = m_pWaitForFlushDone->TimedWait(m_pFlushDoneLock->GetNativeHandle(), MaxWaitTimeForFlush);
}
m_pFlushDoneLock->Unlock();

if (CamxResultSuccess != result)
{
CAMX_LOG_WARN(CamxLogGroupCore, "Flush done timed out for session: %p, but returing success as results"
" should be processed!!", this);
// returning success as result of this request should have already
// been dispatched at the end of flush
result = CamxResultSuccess;
}

return result;
}

m_pFlushLock->Unlock();

m_pLivePendingRequestsLock->Lock();

CAMX_ASSERT(m_maxLivePendingRequests > 0);

while (m_livePendingRequests >= m_maxLivePendingRequests - 1)
{
if (TRUE == m_aDeviceInError)
{
CAMX_LOG_ERROR(CamxLogGroupCore, "Device in error state, returning failure for session:%p", this);
m_pLivePendingRequestsLock->Unlock();
return CamxResultEFailed;
}

if (TRUE == GetSessionTriggeringRecovery())
{
CAMX_LOG_WARN(CamxLogGroupCore, "Session %p triggering recovery, cancelling new PCR", this);
m_pLivePendingRequestsLock->Unlock();
return CamxResultECancelledRequest;
}

UINT waitTime = LivePendingRequestTimeoutDefault;

if (m_sequenceId < m_maxLivePendingRequests * 2)
{
waitTime = LivePendingRequestTimeoutDefault + (m_maxLivePendingRequests * LivePendingRequestTimeOutExtendor);
}

UINT32 additionalExposureTime = CamxAtomicLoadU32(&m_aTotalLongExposureTimeout);
if (TRUE == m_additionalWaitTimeForLivePending)
{
// After flush, if the current exposure time used by the sensor is more than the requested exposure time,
// then we need to wait according to the current exposure time used by the sensor, because the sensor does not
// use the requested exposure time for up to the first 3 frames (pipeline delay).
if (additionalExposureTime < m_currExposureTimeUseBySensor)
{
additionalExposureTime = m_currExposureTimeUseBySensor;
}
additionalExposureTime = additionalExposureTime * 3;
m_additionalWaitTimeForLivePending = FALSE;
}
waitTime = static_cast<UINT>(additionalExposureTime) + waitTime;

CAMX_LOG_VERBOSE(CamxLogGroupCore,
"Timed Wait Live Pending Requests(%u) "
"Sequence Id %u "
"Live Pending Requests %u "
"Max Live Pending Requests %u "
"Live Pending Request TimeOut Extendor %u",
waitTime,
m_sequenceId,
m_livePendingRequests,
m_maxLivePendingRequests,
LivePendingRequestTimeOutExtendor);

result = m_pWaitLivePendingRequests->TimedWait(m_pLivePendingRequestsLock->GetNativeHandle(), waitTime);

CAMX_LOG_VERBOSE(CamxLogGroupCore,
"Timed Wait Live Pending Requests(%u) ...DONE result %s",
waitTime,
CamxResultStrings[result]);

if (CamxResultSuccess != result)
{
break;
}
}

if (CamxResultSuccess != result)
{
m_pLivePendingRequestsLock->Unlock();
if (TRUE == pStaticSettings->enableRecovery)
{
if (TRUE == pStaticSettings->raiserecoverysigabrt)
{
DumpSessionState(SessionDumpFlag::ResetRecovery);
CAMX_LOG_ERROR(CamxLogGroupCore, "FATAL ERROR: Raise SigAbort to debug the root cause of HAL recovery");
OsUtils::RaiseSignalAbort();
}
else
{
CAMX_LOG_CONFIG(CamxLogGroupCore, "Lets do a Reset:UMD");
// Set recovery status to TRUE
SetSessionTriggeringRecovery(TRUE);

NotifyPipelinesOfTriggeringRecovery(TRUE);

DumpSessionState(SessionDumpFlag::ResetUMD);
return CamxResultETimeout;
}
}
else
{
CAMX_LOG_ERROR(CamxLogGroupCore, "HAL Recovery is disabled, cannot trigger");
return CamxResultEFailed;
}
}

for (UINT requestIndex = 0; requestIndex < numRequests; requestIndex++)
{
m_pipelineData[pipelineIndexes[requestIndex]].pPipeline->IncrementLivePendingRequest();
CAMX_LOG_VERBOSE(CamxLogGroupCore, "Framework frame number: %llu Pipeline: %s LivePendingRequests: %d",
pPipelineRequests->pCaptureRequests[requestIndex].frameNumber,
m_pipelineData[pipelineIndexes[requestIndex]].pPipeline->GetPipelineIdentifierString(),
m_pipelineData[pipelineIndexes[requestIndex]].pPipeline->GetLivePendingRequest());
}

m_livePendingRequests++;
m_pLivePendingRequestsLock->Unlock();

if (CamxResultSuccess != m_pFlushLock->TryLock())
{
// Prepare capture result with request error for the pipeline requests but don't
// dispatch it immediately as it would be dispatched at the end of flush call. This will
// ensure that all capture results of the cancelled requests are dispatched first before
// returning the control to the caller.
// Decrementing live pending and sending processing done notification to allow flush to
// process this inflight request's result after handling all enqueued requests
m_pLivePendingRequestsLock->Lock();
PrepareChiRequestErrorForInflightRequests(pPipelineRequests);
PipelinesInflightRequestsNotification(pPipelineRequests);
m_livePendingRequests--;
m_pLivePendingRequestsLock->Unlock();

NotifyProcessingDone();

m_pFlushDoneLock->Lock();
while (TRUE == GetFlushStatus())
{
// Block the thread and send result after flush is done
result = m_pWaitForFlushDone->TimedWait(m_pFlushDoneLock->GetNativeHandle(), MaxWaitTimeForFlush);
}
m_pFlushDoneLock->Unlock();

if (CamxResultSuccess != result)
{
CAMX_LOG_WARN(CamxLogGroupCore, "Flush done timed out for session: %p, but returning success as results"
" should be processed!!", this);
// returning success as result of this request should have already
// been dispatched at the end of flush
result = CamxResultSuccess;
}

return result;
}

// If it reaches here flush lock should already be taken

// Block process request while stream on in progress
m_pStreamOnOffLock->Lock();

ChiCaptureRequest requests[MaxPipelinesPerSession];
m_captureRequest.numRequests = numRequests;

if (MaxRealTimePipelines > m_numRealtimePipelines)
{
// In single camera use case, one CHI request should have only one request per pipeline so that incoming requests will
// not be more than m_requestQueueDepth and the only exception is in Dual Camera use case to have two requests
if (2 <= numRequests)
{
CAMX_LOG_WARN(CamxLogGroupCore, "In batch mode, number of pipeline requests are more than 1");
}
}

SyncProcessCaptureRequest(pPipelineRequests, pipelineIndexes);

for (UINT requestIndex = 0; requestIndex < numRequests; requestIndex++)
{
// check resource availability before enqueue to requestQueue
if ((CamxResultSuccess == result) &&
(TRUE == UsingResourceManager(pipelineIndexes[requestIndex])))
{
ResourceID resourceId = static_cast<ResourceID>(ResourceType::RealtimePipeline);

m_pChiContext->GetResourceManager()->AddResourceReference(resourceId,
static_cast<VOID*>(m_pipelineData[pipelineIndexes[requestIndex]].pPipelineDescriptor), 0);
}
}

for (UINT requestIndex = 0; requestIndex < numRequests; requestIndex++)
{
const ChiCaptureRequest* pCaptureRequest = &(pPipelineRequests->pCaptureRequests[requestIndex]);
UINT32 pipelineIndex = pipelineIndexes[requestIndex];
Pipeline* pPipeline = m_pipelineData[pipelineIndex].pPipeline;
MetadataPool* pPerFrameInputPool = NULL;
MetadataPool* pPerFrameResultPool = NULL;
MetadataPool* pPerFrameInternalPool = NULL;
MetadataPool* pPerFrameEarlyResultPool = NULL;
MetadataPool* pPerUsecasePool = NULL;
MetaBuffer* pInputMetabuffer = NULL;
MetaBuffer* pOutputMetabuffer = NULL;

if (NULL == pPipeline)
{
CAMX_LOG_ERROR(CamxLogGroupCore, "pPipeline is NULL, pipelineIndex %u requestIndex %u",
pipelineIndex, requestIndex);
result = CamxResultEFailed;
}
else if (PipelineStatus::FINALIZED > pPipeline->GetPipelineStatus())
{
result = FinalizeDeferPipeline(pipelineIndex);
if (CamxResultSuccess != result)
{
CAMX_LOG_ERROR(CamxLogGroupCore, "%s: FinalizeDeferPipeline failed pipelineIndex %u PipelineName: %s"
"result: %s", m_pipelineNames, pipelineIndex, pPipeline->GetPipelineName(),
CamxResultStrings[result]);
pPipeline->ReleaseResources();
break;
}
}

if (CamxResultSuccess == result)
{
pPerFrameInputPool = pPipeline->GetPerFramePool(PoolType::PerFrameInput);
pPerFrameResultPool = pPipeline->GetPerFramePool(PoolType::PerFrameResult);
pPerFrameInternalPool = pPipeline->GetPerFramePool(PoolType::PerFrameInternal);
pPerFrameEarlyResultPool = pPipeline->GetPerFramePool(PoolType::PerFrameResultEarly);
pPerUsecasePool = pPipeline->GetPerFramePool(PoolType::PerUsecase);
}

if ((NULL != pPerFrameEarlyResultPool) &&
(NULL != pPerFrameInputPool) &&
(NULL != pPerFrameResultPool) &&
(NULL != pPerFrameInternalPool) &&
(NULL != pPerUsecasePool))
{
// Replace the incoming frameNumber with m_sequenceId to protect against sparse input frameNumbers
CamX::Utils::Memcpy(&requests[requestIndex], pCaptureRequest, sizeof(ChiCaptureRequest));

pInputMetabuffer = reinterpret_cast<MetaBuffer*>(requests[requestIndex].pInputMetadata);
pOutputMetabuffer = reinterpret_cast<MetaBuffer*>(requests[requestIndex].pOutputMetadata);

requests[requestIndex].frameNumber = m_sequenceId;
m_sequenceId++;

result = CanRequestProceed(&requests[requestIndex]);

if (CamxResultSuccess == result)
{
result = WaitOnAcquireFence(&requests[requestIndex]);

if (CamxResultSuccess == result)
{
// Finally copy and enqueue the request and fire the threadpool

// Set the expected exposure time to the current default
UINT32 expectedExposureTime = 0;

// m_batchedFrameIndex of respective pipelines should be less than m_usecaseNumBatchedFrames
CAMX_ASSERT(m_batchedFrameIndex[pipelineIndex] < m_usecaseNumBatchedFrames);

// m_batchedFrameIndex 0 implies a new requestId must be generated - irrespective of batching
// ON/OFF status
if (0 == m_batchedFrameIndex[pipelineIndex])
{
m_requestBatchId[pipelineIndex]++;

CAMX_ASSERT(m_usecaseNumBatchedFrames >= m_captureRequest.requests[requestIndex].numBatchedFrames);
CaptureRequest::PartialClearData(&m_captureRequest.requests[requestIndex]);

m_captureRequest.requests[requestIndex].requestId = m_requestBatchId[pipelineIndex];
m_captureRequest.requests[requestIndex].pMultiRequestData =
&m_requestSyncData[(m_syncSequenceId) % MaxQueueDepth];
CAMX_LOG_VERBOSE(CamxLogGroupCore, "m_syncSequenceId:%d", m_syncSequenceId);
CAMX_LOG_VERBOSE(CamxLogGroupCore, "%s is handling RequestID:%llu whose PeerRequestID:%llu"
" m_syncSequenceId:%llu",
pPipeline->GetPipelineIdentifierString(),
m_captureRequest.requests[requestIndex].requestId,
m_requestSyncData[(m_syncSequenceId) % MaxQueueDepth],
m_syncSequenceId);

pPerFrameInputPool->Invalidate(m_requestBatchId[pipelineIndex]);
pPerFrameResultPool->Invalidate(m_requestBatchId[pipelineIndex]);
pPerFrameEarlyResultPool->Invalidate(m_requestBatchId[pipelineIndex]);
pPerFrameInternalPool->Invalidate(m_requestBatchId[pipelineIndex]);

pPerFrameInputPool->UpdateRequestId(m_requestBatchId[pipelineIndex]);
pPerFrameResultPool->UpdateRequestId(m_requestBatchId[pipelineIndex]);
pPerFrameEarlyResultPool->UpdateRequestId(m_requestBatchId[pipelineIndex]);
pPerFrameInternalPool->UpdateRequestId(m_requestBatchId[pipelineIndex]);
m_ppPerFrameDebugDataPool[pipelineIndex]->UpdateRequestId(m_requestBatchId[pipelineIndex]);

if (TRUE == UseInternalDebugDataMemory())
{
// Assign debug-data memory to the next request
VOID* pSlotDebugData = NULL;
VOID* pBlobDebugData = NULL;
MetadataSlot* pDebugDataPoolSlot =
m_ppPerFrameDebugDataPool[pipelineIndex]->GetSlot(m_requestBatchId[pipelineIndex]);

result = GetDebugDataForSlot(&pSlotDebugData);
if (CamxResultSuccess == result)
{
result = pDebugDataPoolSlot->GetPropertyBlob(&pBlobDebugData);
}
if (CamxResultSuccess == result)
{
CAMX_LOG_VERBOSE(CamxLogGroupDebugData,
"Assigning DebugData for request: %llu, debug-data: %p, pBlobDebugData: %p",
m_requestBatchId[pipelineIndex], pSlotDebugData, pBlobDebugData);
result = InitDebugDataSlot(pBlobDebugData, pSlotDebugData);
}

if (CamxResultSuccess != result)
{
// Debug-Data framework failures are non-fatal
CAMX_LOG_WARN(CamxLogGroupDebugData, "Fail to add debug-data to slot");
result = CamxResultSuccess;
}

}
else
{
// Assign debug-data memory to the next request
VOID* pBlobDebugData = NULL;
MetadataSlot* pDebugDataPoolSlot =
m_ppPerFrameDebugDataPool[pipelineIndex]->GetSlot(m_requestBatchId[pipelineIndex]);

pDebugDataPoolSlot->GetPropertyBlob(&pBlobDebugData);
CAMX_LOG_VERBOSE(CamxLogGroupDebugData,
"Not setting debug-data: RT: %u request[%u]: %llu pBlob: %p : pSlot: %p",
m_isRealTime,
pipelineIndex,
m_requestBatchId[pipelineIndex],
pBlobDebugData);
}

if (TRUE == pStaticSettings->logMetaEnable)
{
CAMX_LOG_META("+----------------------------------------------------");
CAMX_LOG_META("| Input metadata for request: %lld", m_requestBatchId[pipelineIndex]);
CAMX_LOG_META("| %d entries", pInputMetabuffer->Count());
CAMX_LOG_META("+----------------------------------------------------");

pInputMetabuffer->PrintDetails();
}


MetadataSlot* pMetadataSlot = pPerFrameInputPool->GetSlot(m_requestBatchId[pipelineIndex]);
MetadataSlot* pResultMetadataSlot = pPerFrameResultPool->GetSlot(m_requestBatchId[pipelineIndex]);
MetadataSlot* pUsecasePoolSlot = pPerUsecasePool->GetSlot(0);

if (pMetadataSlot != NULL)
{
result = pMetadataSlot->AttachMetabuffer(pInputMetabuffer);

if (CamxResultSuccess == result)
{
result = pResultMetadataSlot->AttachMetabuffer(pOutputMetabuffer);

if (CamxResultSuccess == result)
{
UINT dumpMetadata = HwEnvironment::GetInstance()->GetStaticSettings()->dumpMetadata;

CHAR metadataFileName[FILENAME_MAX];

dumpMetadata &= (TRUE == pPipeline->IsRealTime()) ? RealTimeMetadataDumpMask
: OfflineMetadataDumpMask;

if (0 != (dumpMetadata & 0x3))
{
OsUtils::SNPrintF(metadataFileName, FILENAME_MAX, "inputMetadata_%s_%5d.txt",
pPipeline->GetPipelineIdentifierString(),
m_requestBatchId[pipelineIndex]);

pInputMetabuffer->DumpDetailsToFile(metadataFileName);
}
else if (0 != ((dumpMetadata>>2) & 0x3))
{
OsUtils::SNPrintF(metadataFileName, FILENAME_MAX, "inputMetadata_%s_%5d.bin",
pPipeline->GetPipelineIdentifierString(),
m_requestBatchId[pipelineIndex]);

pInputMetabuffer->BinaryDump(metadataFileName);
}
}
else
{
CAMX_LOG_ERROR(CamxLogGroupCore, "Error attach output failed for slot %d",
m_requestBatchId[pipelineIndex]);
}
}
else
{
CAMX_LOG_ERROR(CamxLogGroupCore, "Error attach input failed for slot %d",
m_requestBatchId[pipelineIndex]);
}
CAMX_LOG_INFO(CamxLogGroupCore, "AttachMetabuffer in pipeline %s InputMetaBuffer %p "
"OutputMetaBuffer %p reqId %llu",
pPipeline->GetPipelineIdentifierString(),
pInputMetabuffer,
pOutputMetabuffer,
m_requestBatchId[pipelineIndex]);

if (CamxResultSuccess == result)
{

INT64* pExposurePriority = NULL;
UINT64* pSensorExposureTime = static_cast<UINT64*>(pMetadataSlot->GetMetadataByTag(
SensorExposureTime));

UINT64* pSensorFrameDurationTime = static_cast<UINT64*>(pMetadataSlot->GetMetadataByTag(
SensorFrameDuration));

INT32* pExposurePriorityMode = static_cast<INT32*>(pMetadataSlot->GetMetadataByTag(
m_exposurePriorityModeTagId));

UINT8* pAEMode = static_cast<UINT8*>(pMetadataSlot->GetMetadataByTag(
ControlAEMode));
UINT8* pControlMode = static_cast<UINT8*>(pMetadataSlot->GetMetadataByTag(
ControlMode));

UINT32 exposurePriorityTagId = 0;
CDKResult resultCode = VendorTagManager::QueryVendorTagLocation(
"org.codeaurora.qcamera3.iso_exp_priority",
"use_iso_exp_priority",
&exposurePriorityTagId);

ControlAEModeValues AEMode = ControlAEModeValues::ControlAEModeEnd;
ControlModeValues controlMode = ControlModeValues::ControlModeEnd;

if (CDKResultSuccess == resultCode)
{
pExposurePriority = static_cast<INT64*>(pMetadataSlot->GetMetadataByTag(
exposurePriorityTagId));
}

if ((NULL != pControlMode) && (NULL != pAEMode))
{
AEMode = *(reinterpret_cast<ControlAEModeValues*>(pAEMode));
controlMode = *(reinterpret_cast<ControlModeValues*>(pControlMode));
}

// Guard against a NULL exposure-priority-mode tag pointer before dereferencing
INT32 exposureProrityMode = (NULL != pExposurePriorityMode) ? *pExposurePriorityMode : 0;

if ((1 == exposureProrityMode) && (NULL != pExposurePriority))
{
expectedExposureTime = static_cast<UINT32>((*pExposurePriority) /
static_cast<UINT64>(1000000));
}
else if (NULL != pSensorExposureTime)
{
UINT32 exposureTimeInMs =
static_cast<UINT32>((*pSensorExposureTime) / static_cast<UINT64>(1000000));
if ( (exposureTimeInMs > expectedExposureTime) &&
(TRUE == m_isRealTime) &&
(TRUE == pStaticSettings->extendedTimeForLongExposure) &&
((ControlModeValues::ControlModeOff == controlMode) ||
(ControlAEModeValues::ControlAEModeOff == AEMode)))
{
expectedExposureTime = exposureTimeInMs;
}

if ((NULL != pSensorFrameDurationTime) && (NULL != pExposurePriorityMode))
{
UINT32 sensorDurationTimeInMs =
static_cast<UINT32>((*pSensorFrameDurationTime) / static_cast<UINT64>(1000000));

if ((1 == exposureProrityMode) && (ControlAEModeValues::ControlAEModeOn == AEMode))
{
expectedExposureTime =
CamX::Utils::MaxUINT32(sensorDurationTimeInMs, expectedExposureTime);
}
}
}

// m_batchedFrameIndex of 0 implies batching may be switched ON/OFF starting from this frame
if (TRUE == IsUsecaseBatchingEnabled())
{
RangeINT32* pFPSRange = static_cast<RangeINT32*>(pMetadataSlot->GetMetadataByTag(
ControlAETargetFpsRange));

// Must have been filled by GetMetadataByTag()
CAMX_ASSERT(NULL != pFPSRange);

BOOL hasBatchingModeChanged = FALSE;

if ((NULL != pFPSRange) && (pFPSRange->min == pFPSRange->max))
{
if (FALSE == m_isRequestBatchingOn)
{
hasBatchingModeChanged = TRUE;
}

m_isRequestBatchingOn = TRUE;
}
else
{
if (TRUE == m_isRequestBatchingOn)
{
hasBatchingModeChanged = TRUE;
}

m_isRequestBatchingOn = FALSE;
}

// If batching mode changes from ON to OFF or OFF to ON we need to dynamically adjust
// m_requestQueueDepth - because m_requestQueueDepth is different with batching ON or
// OFF With batching OFF it is RequestQueueDepth and with ON it is
// "RequestQueueDepth * usecaseNumBatchedFrames"
if (TRUE == hasBatchingModeChanged)
{
// Before changing m_requestQueueDepth, we need to make sure:
// 1. All the current pending requests are processed by the Pipeline
// 2. All the results for all those processed requests are sent back to the
// framework
//
// (1) is done by waiting for the request queue to become empty
// (2) is done by waiting on a condition variable that is signaled when all results
// are sent back to the framework
m_pRequestQueue->WaitEmpty();

m_pLivePendingRequestsLock->Lock();
m_livePendingRequests--;
m_pLivePendingRequestsLock->Unlock();

if (CamxResultSuccess != WaitTillAllResultsAvailable())
{
CAMX_LOG_WARN(CamxLogGroupCore,
"Failed to drain on batching mode change, calling flush");
Flush();
}

m_pLivePendingRequestsLock->Lock();
m_livePendingRequests++;
m_pLivePendingRequestsLock->Unlock();

// The request and result queues are completely empty at this point, and this
// function is the only thing that can add to the request queue. Safe to change
// m_requestQueueDepth at this point
if (TRUE == m_isRequestBatchingOn)
{
m_requestQueueDepth =
DefaultRequestQueueDepth *
GetBatchedHALOutputNum(m_usecaseNumBatchedFrames, m_HALOutputBufferCombined);
m_maxLivePendingRequests =
m_defaultMaxLivePendingRequests *
GetBatchedHALOutputNum(m_usecaseNumBatchedFrames, m_HALOutputBufferCombined);
}
else
{
m_requestQueueDepth = DefaultRequestQueueDepth;
m_maxLivePendingRequests = m_defaultMaxLivePendingRequests;
}
}
else
{
// Need to set default value if batch mode is enabled but request batching is off.
// In this case we have only a preview request.
if (FALSE == m_isRequestBatchingOn)
{
m_requestQueueDepth = DefaultRequestQueueDepth;
m_maxLivePendingRequests = m_defaultMaxLivePendingRequests;
}
}
}

if ((0 != m_recordingEndOfStreamTagId) && (0 != m_recordingEndOfStreamRequestIdTagId))
{
UINT8* pRecordingEndOfStream = static_cast<UINT8*>(pMetadataSlot->GetMetadataByTag(
m_recordingEndOfStreamTagId));

if ((FALSE == pStaticSettings->disableDRQPreemptionOnStopRecord) &&
((NULL != pRecordingEndOfStream) && (0 != *pRecordingEndOfStream)))
{
UINT64 requestId = m_requestBatchId[pipelineIndex];
CAMX_LOG_INFO(CamxLogGroupCore, "Recording stopped on reqId %llu", requestId);

UINT32 endOfStreamRequestIdTag = m_recordingEndOfStreamRequestIdTagId;

pUsecasePoolSlot->SetMetadataByTag(endOfStreamRequestIdTag,
static_cast<VOID*>(&requestId),
sizeof(requestId),
"camx_session");

pUsecasePoolSlot->PublishMetadataList(&endOfStreamRequestIdTag, 1);

m_setVideoPerfModeFlag = TRUE;
m_pDeferredRequestQueue->SetPreemptDependencyFlag(TRUE);
m_pDeferredRequestQueue->DispatchReadyNodes();
}
else
{
m_setVideoPerfModeFlag = FALSE;
m_pDeferredRequestQueue->SetPreemptDependencyFlag(FALSE);
}
}
else
{
CAMX_LOG_INFO(CamxLogGroupCore, "No stop recording vendor tags");
}

ControlCaptureIntentValues* pCaptureIntent = static_cast<ControlCaptureIntentValues*>(
pMetadataSlot->GetMetadataByTag(ControlCaptureIntent));

// Update dynamic pipeline depth metadata which is required in capture result.
pResultMetadataSlot->SetMetadataByTag(RequestPipelineDepth,
static_cast<VOID*>(&(m_requestQueueDepth)),
1,
"camx_session");

if (NULL != pCaptureIntent)
{
// Copy Intent to result
result = pResultMetadataSlot->SetMetadataByTag(ControlCaptureIntent, pCaptureIntent, 1,
"camx_session");
}
}
else
{
CAMX_LOG_ERROR(CamxLogGroupCore, "Couldn't copy request metadata!");
}
}
else
{
CAMX_LOG_ERROR(CamxLogGroupCore,
"Couldn't get metadata slot for request id: %d",
requests[requestIndex].frameNumber);

result = CamxResultEFailed;
}

// Get the per frame sensor mode index
UINT* pSensorModeIndex = NULL;

if (m_vendorTagSensorModeIndex > 0)
{
if (NULL != pMetadataSlot)
{
pSensorModeIndex = static_cast<UINT*>(pMetadataSlot->GetMetadataByTag(
m_vendorTagSensorModeIndex));
}

if (NULL != pSensorModeIndex)
{
pResultMetadataSlot->WriteLock();

pStaticSettings = HwEnvironment::GetInstance()->GetStaticSettings();

if (TRUE == pStaticSettings->perFrameSensorMode)
{
pResultMetadataSlot->SetMetadataByTag(PropertyIDSensorCurrentMode, pSensorModeIndex, 1,
"camx_session");
pResultMetadataSlot->PublishMetadata(PropertyIDSensorCurrentMode);
}

pResultMetadataSlot->Unlock();
}
}

// Check and update, if the preview stream is present in this request
if (m_previewStreamPresentTagId > 0)
{
BOOL isPreviewPresent = FALSE;
for (UINT32 i = 0; i < pCaptureRequest->numOutputs; i++)
{
CHISTREAM* pStream = pCaptureRequest->pOutputBuffers[i].pStream;
ChiStreamWrapper* pChiStream = static_cast<ChiStreamWrapper*>(pStream->pPrivateInfo);
if ((NULL != pChiStream) && (TRUE == pChiStream->IsPreviewStream()))
{
isPreviewPresent = TRUE;
break;
}
}
// Update the metadata tag.
pResultMetadataSlot->WriteLock();
pResultMetadataSlot->SetMetadataByTag(m_previewStreamPresentTagId,
static_cast<VOID*>(&(isPreviewPresent)),
1,
"camx_session");
pResultMetadataSlot->PublishMetadata(m_previewStreamPresentTagId);
pResultMetadataSlot->Unlock();
}

}

if (CamxResultSuccess == result)
{
/// Adding 1 to avoid 0 as 0 is flagged as invalid
UINT64 cslsyncid = pCaptureRequest->frameNumber + 1;
CaptureRequest* pRequest = &(m_captureRequest.requests[requestIndex]);
UINT batchedFrameIndex = m_batchedFrameIndex[pipelineIndex];
ChiStreamWrapper* pChiStreamWrapper = NULL;
ChiStream* pChiStream = NULL;

m_lastCSLSyncId = cslsyncid;
pRequest->CSLSyncID = cslsyncid;
pRequest->expectedExposureTime = expectedExposureTime;
pRequest->pPrivData = pCaptureRequest->pPrivData;
pRequest->pStreamBuffers[batchedFrameIndex].originalFrameworkNumber = pCaptureRequest->frameNumber;
pRequest->pStreamBuffers[batchedFrameIndex].numInputBuffers = requests[requestIndex].numInputs;

pRequest->pStreamBuffers[batchedFrameIndex].sequenceId =
static_cast<UINT32>(requests[requestIndex].frameNumber);

for (UINT i = 0; i < requests[requestIndex].numInputs; i++)
{
/// @todo (CAMX-1015): Avoid this memcpy.
Utils::Memcpy(&pRequest->pStreamBuffers[batchedFrameIndex].inputBufferInfo[i].inputBuffer,
&requests[requestIndex].pInputBuffers[i],
sizeof(ChiStreamBuffer));

pChiStream = reinterpret_cast<ChiStream*>(
pRequest->pStreamBuffers[batchedFrameIndex].inputBufferInfo[i].inputBuffer.pStream);

if (NULL != pChiStream)
{
pChiStreamWrapper = reinterpret_cast<ChiStreamWrapper*>(pChiStream->pPrivateInfo);

if (pChiStreamWrapper != NULL)
{
CamxAtomicStoreU(&pRequest->pStreamBuffers[batchedFrameIndex].inputBufferInfo[i].fenceRefCount,
pChiStreamWrapper->GetNumberOfPortId());
}
else
{
CamxAtomicStoreU(&pRequest->pStreamBuffers[batchedFrameIndex].inputBufferInfo[i].fenceRefCount,
1);
}

if (0 == pRequest->pStreamBuffers[batchedFrameIndex].inputBufferInfo[i].fenceRefCount)
{
CAMX_LOG_WARN(CamxLogGroupCore, "fenceRefCount shouldn't be zero for input buffer");

// Assigning fence count to 1 for input buffer if it is 0.
// this is to avoid any regressions due to source port sharing.
pRequest->pStreamBuffers[batchedFrameIndex].inputBufferInfo[i].fenceRefCount = 1;
}

// The check below is ideally not required, but to avoid
// regressions it is made applicable only to MFNR/MFSR
if (requests[requestIndex].numInputs > 1)
{

if (pChiStreamWrapper != NULL)
{
pRequest->pStreamBuffers[batchedFrameIndex].inputBufferInfo[i].portId =
pChiStreamWrapper->GetPortId();
}
else
{
if (i == 0)
pRequest->pStreamBuffers[batchedFrameIndex].inputBufferInfo[i].portId = 0;
else if (i == 1)
pRequest->pStreamBuffers[batchedFrameIndex].inputBufferInfo[i].portId = 3;
}
CAMX_LOG_VERBOSE(CamxLogGroupCore,
"input buffers #%d, port %d, dim %d x %d wrapper %x, stream %x fenceRefCount %d",
i, pRequest->pStreamBuffers[batchedFrameIndex].inputBufferInfo[i].portId,
pChiStream->width, pChiStream->height, pChiStreamWrapper, pChiStream,
pRequest->pStreamBuffers[batchedFrameIndex].inputBufferInfo[i].fenceRefCount);
}
}
}

if (CamxResultSuccess == result)
{
/// @todo (CAMX-1797) Delete this
pRequest->pipelineIndex = pipelineIndex;

CAMX_LOG_VERBOSE(CamxLogGroupCore,
"Submit to pipeline index: %d / number of pipelines: %d batched index %d",
pRequest->pipelineIndex, m_numPipelines, m_batchedFrameIndex[pipelineIndex]);

CAMX_ASSERT(requests[requestIndex].numOutputs <= MaxOutputBuffers);

pRequest->pStreamBuffers[m_batchedFrameIndex[pipelineIndex]].numOutputBuffers =
requests[requestIndex].numOutputs;

for (UINT i = 0; i < requests[requestIndex].numOutputs; i++)
{
/// @todo (CAMX-1015): Avoid this memcpy.
Utils::Memcpy(&pRequest->pStreamBuffers[m_batchedFrameIndex[pipelineIndex]].outputBuffers[i],
&requests[requestIndex].pOutputBuffers[i],
sizeof(ChiStreamBuffer));
}

// Increment batch index only if batch mode is on
if (TRUE == m_isRequestBatchingOn)
{
m_batchedFrameIndex[pipelineIndex]++;
pRequest->numBatchedFrames = m_usecaseNumBatchedFrames;
pRequest->HALOutputBufferCombined = m_HALOutputBufferCombined;
}
else
{
m_batchedFrameIndex[pipelineIndex] = 0;
pRequest->numBatchedFrames = 1;
pRequest->HALOutputBufferCombined = FALSE;
}

}
}

if (CamxResultSuccess == result)
{
// Fill Color Metadata for output buffer
result = SetPerStreamColorMetadata(pCaptureRequest, pPerFrameInputPool,
m_requestBatchId[pipelineIndex]);
}
}
else
{
CAMX_LOG_ERROR(CamxLogGroupCore, "Acquire fence failed for request");
}
}
else
{
CAMX_LOG_INFO(CamxLogGroupCore, "Session unable to process request because of device state");
}
}
else
{
CAMX_LOG_ERROR(CamxLogGroupCore, "PerFrame MetadataPool is NULL");
result = CamxResultEInvalidPointer;
}
}

// Update multi request sync data
if ((CamxResultSuccess == result) && (m_numInputSensors >= 2))
{
UpdateMultiRequestSyncData(pPipelineRequests);
}

if (CamxResultSuccess == result)
{
// Once we batch all the frames according to usecaseNumBatchedFrames we enqueue the capture request.
// For non-batch mode m_usecaseNumBatchedFrames is 1 so we enqueue every request. If batching is ON
// we enqueue the batched capture request only after m_usecaseBatchSize number of requests have been
// received
BOOL batchFrameReady = TRUE;
for (UINT requestIndex = 0; requestIndex < numRequests; requestIndex++)
{
UINT32 pipelineIndex = pipelineIndexes[requestIndex];

if (m_batchedFrameIndex[pipelineIndex] !=
GetBatchedHALOutputNum(m_usecaseNumBatchedFrames, m_HALOutputBufferCombined))
{
batchFrameReady = FALSE;
break; // batch frame number must be same for all the pipelines in same session
}
}

if ((FALSE == m_isRequestBatchingOn) || (TRUE == batchFrameReady))
{
result = m_pRequestQueue->EnqueueWait(&m_captureRequest);

if (CamxResultSuccess == result)
{
// Check for good conditions once more, if enqueue had to wait
for (UINT requestIndex = 0; requestIndex < numRequests; requestIndex++)
{
result = CanRequestProceed(&requests[requestIndex]);
if (CamxResultSuccess != result)
{
break;
}
}
}

if (CamxResultSuccess == result)
{
for (UINT requestIndex = 0; requestIndex < numRequests; requestIndex++)
{
const ChiCaptureRequest* pCaptureRequest = &(pPipelineRequests->pCaptureRequests[requestIndex]);
CAMX_LOG_CONFIG(CamxLogGroupCore,
"Pipeline: %s Added Sequence ID %lld CHI framenumber %lld to request queue and launched job "
"with request id %llu",
m_pipelineData[pipelineIndexes[requestIndex]].pPipeline->GetPipelineIdentifierString(),
requests[requestIndex].frameNumber, pCaptureRequest->frameNumber,
m_requestBatchId[pipelineIndexes[requestIndex]]);
}

VOID* pData[] = {this, NULL};
result = m_pThreadManager->PostJob(m_hJobFamilyHandle,
NULL,
&pData[0],
FALSE,
FALSE);
}
else
{
CAMX_LOG_WARN(CamxLogGroupCore, "Session unable to process request because of device state");
}

for (UINT requestIndex = 0; requestIndex < numRequests; requestIndex++)
{
m_batchedFrameIndex[pipelineIndexes[requestIndex]] = 0;
}
}
}

m_pStreamOnOffLock->Unlock();
m_pFlushLock->Unlock();
CAMX_ASSERT(CamxResultSuccess == result);

return result;
}
4.4.3.1 Session::ProcessRequest
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
// Session::ProcessRequest
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
CamxResult Session::ProcessRequest()
{
CamxResult result = CamxResultSuccess;
SessionCaptureRequest* pSessionRequest = NULL;

// This should only ever be called from threadpool, should never be reentrant, and nothing else grabs the request lock.
// If there is contention on this lock something very bad happened.
result = m_pRequestLock->TryLock();
if (CamxResultSuccess != result)
{
// Should never happen...return control back to the threadpool and this will eventually get called again
CAMX_LOG_ERROR(CamxLogGroupCore, "Could not grab m_pRequestLock...undefined behavior possible");

return CamxResultETryAgain;
}

// Initialize a result holder expected for the result coming out of this request
// This information will be used in the result notification path

pSessionRequest = static_cast<SessionCaptureRequest*>(m_pRequestQueue->Dequeue());

if (NULL != pSessionRequest)
{
// If the session request contains multiple pipeline requests, the pipelines need to be synced
// and the batch frame number must be the same.
UINT32 numBatchedFrames = pSessionRequest->requests[0].GetBatchedHALOutputNum(&pSessionRequest->requests[0]);
for (UINT requestIndex = 1; requestIndex < pSessionRequest->numRequests; requestIndex++)
{
if (numBatchedFrames !=
pSessionRequest->requests[requestIndex].GetBatchedHALOutputNum(&pSessionRequest->requests[requestIndex]))
{
CAMX_LOG_ERROR(CamxLogGroupCore,
"batch frame number are different in different pipline request");
m_pRequestLock->Unlock();
return CamxResultEInvalidArg;
}
}

const SettingsManager* pSettingManager = HwEnvironment::GetInstance()->GetSettingsManager();

if (TRUE == pSettingManager->GetStaticSettings()->dynamicPropertiesEnabled)
{
// NOWHINE CP036a: We're actually poking into updating the settings dynamically so we do want to do this
const_cast<SettingsManager*>(pSettingManager)->UpdateOverrideProperties();
}

LightweightDoublyLinkedListNode** ppResultNodes = NULL;
SessionResultHolder** ppSessionResultHolder = NULL;

for (UINT requestIndex = 0; requestIndex < pSessionRequest->numRequests; requestIndex++)
{
CaptureRequest& rRequest = pSessionRequest->requests[requestIndex];
CAMX_ASSERT(rRequest.numBatchedFrames > 0);

if (NULL == ppResultNodes)
{
ppResultNodes = reinterpret_cast<LightweightDoublyLinkedListNode**>(
CAMX_CALLOC(numBatchedFrames * sizeof(LightweightDoublyLinkedListNode*)));

if (NULL == ppResultNodes)
{
CAMX_LOG_ERROR(CamxLogGroupCore, "memory allocation failed for ppResultNodes for request %llu",
rRequest.requestId);
result = CamxResultENoMemory;
break;
}
}

if (NULL == ppSessionResultHolder)
{
ppSessionResultHolder = reinterpret_cast<SessionResultHolder**>(
CAMX_CALLOC(numBatchedFrames * sizeof(SessionResultHolder*)));
if (NULL == ppSessionResultHolder)
{
CAMX_LOG_ERROR(CamxLogGroupCore, "memory allocation failed for ppSessionResultHolder for request %llu",
rRequest.requestId);
result = CamxResultENoMemory;
break;
}
}

if ((NULL != ppResultNodes) && (NULL != ppSessionResultHolder))
{
// Add the sequence id to framework frame number mapping after CheckRequestProcessingRate() = TRUE.
// This makes sure a new process request does not override an old result that has not been sent to the framework yet.
for (UINT32 batchIndex = 0; batchIndex < rRequest.GetBatchedHALOutputNum(&rRequest); batchIndex++)
{
Pipeline* pPipeline = m_pipelineData[rRequest.pipelineIndex].pPipeline;
StreamBufferInfo& rStreamBuffer = rRequest.pStreamBuffers[batchIndex];
UINT64 chiFrameNumber = rStreamBuffer.originalFrameworkNumber;
UINT64 requestId = rRequest.requestId;
UINT32 sequenceId = rStreamBuffer.sequenceId;
UINT64 CSLSyncID = rRequest.CSLSyncID;
auto hPipeline = pPipeline->GetPipelineDescriptor();
m_fwFrameNumberMap[sequenceId % MaxQueueDepth] = chiFrameNumber;

CAMX_LOG_REQMAP(CamxLogGroupCore,
"chiFrameNum: %llu <==> requestId: %llu <==> sequenceId: %u <==> CSLSyncId: %llu -- %s",
chiFrameNumber, requestId, sequenceId, CSLSyncID,
pPipeline->GetPipelineIdentifierString());

BINARY_LOG(LogEvent::ReqMap_CamXInfo, chiFrameNumber, requestId, sequenceId,
CSLSyncID, hPipeline, this);
}

for (UINT batchIndex = 0; batchIndex < rRequest.GetBatchedHALOutputNum(&rRequest); batchIndex++)
{
UINT32 sequenceId = rRequest.pStreamBuffers[batchIndex].sequenceId;

CAMX_TRACE_MESSAGE_F(CamxLogGroupCore, "ProcessRequest: RequestId: %llu sequenceId: %u",
rRequest.requestId, sequenceId);

LightweightDoublyLinkedListNode* pNode = ppResultNodes[batchIndex];
if (NULL == pNode)
{
pNode = reinterpret_cast<LightweightDoublyLinkedListNode*>
(CAMX_CALLOC(sizeof(LightweightDoublyLinkedListNode)));
ppResultNodes[batchIndex] = pNode;
}

SessionResultHolder* pSessionResultHolder = ppSessionResultHolder[batchIndex];
if (NULL == pSessionResultHolder)
{
pSessionResultHolder = reinterpret_cast<SessionResultHolder*>
(CAMX_CALLOC(sizeof(SessionResultHolder)));
ppSessionResultHolder[batchIndex] = pSessionResultHolder;
}

if ((NULL == pNode) ||
(NULL == pSessionResultHolder))
{
CAMX_LOG_ERROR(CamxLogGroupCore, "Out of memory pNode=%p pSessionResultHolder=%p",
pNode, pSessionResultHolder);
result = CamxResultENoMemory;

if (NULL != pNode)
{
CAMX_FREE(pNode);
pNode = NULL;
}

if (NULL != pSessionResultHolder)
{
CAMX_FREE(pSessionResultHolder);
pSessionResultHolder = NULL;
}
}

if (CamxResultSuccess == result)
{
ResultHolder* pHolder = &(pSessionResultHolder->resultHolders[requestIndex]);
Utils::Memset(pHolder, 0x0, sizeof(ResultHolder));
pHolder->sequenceId = sequenceId;
pHolder->numOutBuffers = rRequest.pStreamBuffers[batchIndex].numOutputBuffers;
pHolder->numInBuffers = rRequest.pStreamBuffers[batchIndex].numInputBuffers;
pHolder->pendingMetadataCount = m_numMetadataResults;
pHolder->pPrivData = rRequest.pPrivData;
pHolder->requestId = static_cast<UINT32>(rRequest.requestId);
pHolder->expectedExposureTime = static_cast<UINT32>(rRequest.expectedExposureTime);

// We may not get a result metadata for reprocess requests
// This logic may need to be expanded for multi-camera CHI override scenarios,
// so as to designate which pipelines are exactly offline
if (rRequest.pipelineIndex > 0)
{
pHolder->tentativeMetadata = TRUE;
}

for (UINT32 buffer = 0; buffer < pHolder->numOutBuffers; buffer++)
{
UINT32 streamIndex = GetStreamIndex(reinterpret_cast<ChiStream*>(
rRequest.pStreamBuffers[batchIndex].outputBuffers[buffer].pStream));

if (streamIndex < MaxNumOutputBuffers)
{
pHolder->bufferHolder[streamIndex].pBuffer = GetResultStreamBuffer();

Utils::Memcpy(pHolder->bufferHolder[streamIndex].pBuffer,
&(rRequest.pStreamBuffers[batchIndex].outputBuffers[buffer]),
sizeof(ChiStreamBuffer));

pHolder->bufferHolder[streamIndex].valid = FALSE;

pHolder->bufferHolder[streamIndex].pStream = reinterpret_cast<ChiStream*>(
rRequest.pStreamBuffers[batchIndex].outputBuffers[buffer].pStream);

ChiStreamWrapper* pChiStreamWrapper = static_cast<ChiStreamWrapper*>(
rRequest.pStreamBuffers[batchIndex].outputBuffers[buffer].pStream->pPrivateInfo);

pChiStreamWrapper->AddEnabledInFrame(rRequest.pStreamBuffers[batchIndex].sequenceId);
}
else
{
CAMX_LOG_ERROR(CamxLogGroupCore, "stream index = %d < MaxNumOutputBuffers = %d",
streamIndex, MaxNumOutputBuffers);
}
}

// Create internal private input buffer fences and release them (below), so that
// the input fence trigger mechanism works the same way as when the input fences
// are released from a previous/parent node's output buffer

for (UINT32 buffer = 0; buffer < pHolder->numInBuffers; buffer++)
{
StreamInputBufferInfo* pInputBufferInfo = rRequest.pStreamBuffers[batchIndex].inputBufferInfo;
ChiStreamBuffer* pInputBuffer = &(pInputBufferInfo[buffer].inputBuffer);
CSLFence* phCSLFence = NULL;
CHIFENCEHANDLE* phAcquireFence = NULL;
UINT32 streamIndex = 0;
ChiStream* pInputBufferStream = reinterpret_cast<ChiStream*>(pInputBuffer->pStream);

/// @todo (CAMX-1797) Kernel currently requires us to pass a fence always even if we don't need it.
/// Fix that and also need to handle input fence mechanism
phCSLFence = &(pInputBufferInfo[buffer].fence);

if ((TRUE == pInputBuffer->acquireFence.valid) &&
(ChiFenceTypeInternal == pInputBuffer->acquireFence.type))
{
phAcquireFence = &(pInputBuffer->acquireFence.hChiFence);
}

if (FALSE == IsValidCHIFence(phAcquireFence))
{
result = CSLCreatePrivateFence("InputBufferFence_session", phCSLFence);
CAMX_ASSERT(CamxResultSuccess == result);

if (CamxResultSuccess != result)
{
CAMX_LOG_ERROR(CamxLogGroupCore, "process request failed : result %d", result);
break;
}
else
{
CAMX_LOG_VERBOSE(CamxLogGroupCore, "CSLCreatePrivateFence:%d Used", *phCSLFence);
}
pInputBufferInfo[buffer].isChiFence = FALSE;
}
else
{
*phCSLFence = reinterpret_cast<ChiFence*>(*phAcquireFence)->hFence;
pInputBufferInfo[buffer].isChiFence = TRUE;
CAMX_LOG_VERBOSE(CamxLogGroupCore, "AcquireFence:%d Used", *phCSLFence);
}

// While CamX <=> CHI allows for more than 1 Input Buffer per request,
// The CHI <=> HAL <=> App Framework layers do not.
// The input buffer holder can only properly represent Camera3 Input Streams
// The maximum number of Camera3 input buffers supported is 1
CAMX_STATIC_ASSERT(MaxNumInputBuffers == 1);
CAMX_STATIC_ASSERT(CAMX_ARRAY_SIZE(pHolder->inputbufferHolder) == 1);
streamIndex = 0;
if (streamIndex < MaxNumInputBuffers)
{
ChiStreamBuffer* pChiResultStreamBuffer = GetResultStreamBuffer();

Utils::Memcpy(pChiResultStreamBuffer, pInputBuffer, sizeof(ChiStreamBuffer));
pHolder->inputbufferHolder[streamIndex].pBuffer = pChiResultStreamBuffer;
pHolder->inputbufferHolder[streamIndex].pStream = pInputBufferStream;
}
}
}
else
{
break;
}
}
}
}

if ((NULL != ppResultNodes) && (NULL != ppSessionResultHolder))
{
// Lets start accounting for this request's exposure time
UINT32 totalResultExposureTime = 0;

// Now add the result holder to the linked list
for (UINT batchIndex = 0; batchIndex < numBatchedFrames; batchIndex++)
{
LightweightDoublyLinkedListNode* pNode = ppResultNodes[batchIndex];
SessionResultHolder* pSessionResultHolder = ppSessionResultHolder[batchIndex];
pSessionResultHolder->numResults = pSessionRequest->numRequests;
pNode->pData = pSessionResultHolder;
m_pResultHolderListLock->Lock();
m_resultHolderList.InsertToTail(pNode);
m_pResultHolderListLock->Unlock();

for (UINT idx = 0; idx < pSessionResultHolder->numResults; idx++)
{
totalResultExposureTime += pSessionResultHolder->resultHolders[idx].expectedExposureTime;
}
}

UINT32 totalExposureTime = CamxAtomicIncU32(&m_aTotalLongExposureTimeout, totalResultExposureTime);
CAMX_LOG_VERBOSE(CamxLogGroupCore,
"Session %p - exposureTimeout after accounting for %u requests starting with requestID %llu = %u",
this,
pSessionRequest->numRequests,
pSessionRequest->requests->requestId,
totalExposureTime);
}

// De-allocate the arrays ppResultNodes and ppSessionResultHolder.
// The actual node and session result holder will be freed in processResult
if (NULL != ppResultNodes)
{
CAMX_FREE(ppResultNodes);
ppResultNodes = NULL;
}
if (NULL != ppSessionResultHolder)
{
CAMX_FREE(ppSessionResultHolder);
ppSessionResultHolder = NULL;
}
}
m_pRequestLock->Unlock();

if (NULL != pSessionRequest)
{
BOOL isSyncMode = TRUE;

if ((pSessionRequest->numRequests <= 1) ||
(CSLSyncLinkModeNoSync == m_linkSyncMode))
{
isSyncMode = FALSE;
}

for (UINT requestIndex = 0; requestIndex < pSessionRequest->numRequests; requestIndex++)
{
CaptureRequest& rRequest = pSessionRequest->requests[requestIndex];

result = m_pipelineData[rRequest.pipelineIndex].pPipeline->OpenRequest(rRequest.requestId,
rRequest.CSLSyncID, isSyncMode, rRequest.expectedExposureTime);

CAMX_LOG_INFO(CamxLogGroupCore,
"pipeline[%d] OpenRequest called for request id = %llu withCSLSyncID %llu",
rRequest.pipelineIndex,
rRequest.requestId,
rRequest.CSLSyncID);

if ((CamxResultSuccess != result) && (CamxResultECancelledRequest != result))
{
CAMX_LOG_ERROR(CamxLogGroupCore,
"pipeline[%d] OpenRequest failed for request id = %llu withCSLSyncID %llu result = %s",
rRequest.pipelineIndex,
rRequest.requestId,
rRequest.CSLSyncID,
Utils::CamxResultToString(result));
}
else if (CamxResultECancelledRequest == result)
{
CAMX_LOG_INFO(CamxLogGroupCore, "Session: %p is in Flush state, Canceling OpenRequest for pipeline[%d] "
"for request id = %llu", this, rRequest.pipelineIndex, rRequest.requestId);

result = CamxResultSuccess;
}
}

if (CamxResultSuccess == result)
{
for (UINT requestIndex = 0; requestIndex < pSessionRequest->numRequests; requestIndex++)
{
CaptureRequest& rRequest = pSessionRequest->requests[requestIndex];
PipelineProcessRequestData pipelineProcessRequestData = {};

result = SetupRequestData(&rRequest, &pipelineProcessRequestData);

// Set timestamp for start of request processing
PopulateSessionRequestTimingBuffer(&rRequest);

if (CamxResultSuccess == result)
{
result = m_pipelineData[rRequest.pipelineIndex].pPipeline->ProcessRequest(&pipelineProcessRequestData);
}

if (CamxResultSuccess != result)
{
CAMX_LOG_ERROR(CamxLogGroupCore, "pipeline[%u] ProcessRequest failed for request %llu - %s",
rRequest.pipelineIndex,
rRequest.requestId,
Utils::CamxResultToString(result));
}

if (NULL != pipelineProcessRequestData.pPerBatchedFrameInfo)
{
CAMX_FREE(pipelineProcessRequestData.pPerBatchedFrameInfo);
pipelineProcessRequestData.pPerBatchedFrameInfo = NULL;
}
}
}
m_pRequestQueue->Release(pSessionRequest);
}
return result;
}
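Before moving on, note the fence-selection pattern inside the input-buffer loop above: if the request carries a valid CHI acquire fence it is reused (and marked with isChiFence = TRUE so the Session will not destroy it); otherwise a private fence is created, so the downstream trigger path behaves identically either way. Below is a minimal, self-contained sketch of that decision; the types and helpers are invented stand-ins, not the real CSL/CHI APIs:

#include <cstdio>

using ToyFence = int; // invented stand-in for CSLFence / CHIFENCEHANDLE

// Hypothetical stand-ins for IsValidCHIFence / CSLCreatePrivateFence
static bool IsValidToyChiFence(const ToyFence* pFence) { return (nullptr != pFence); }
static ToyFence CreateToyPrivateFence() { static ToyFence nextFence = 100; return nextFence++; }

// Mirrors the branch in the loop above: prefer the caller's acquire fence,
// otherwise create a private one and mark it for release by the session.
static ToyFence SelectInputFence(const ToyFence* pAcquireFence, bool* pIsChiFence)
{
    if (IsValidToyChiFence(pAcquireFence))
    {
        *pIsChiFence = true;   // caller-owned CHI fence; the session must not destroy it
        return *pAcquireFence;
    }
    *pIsChiFence = false;      // session-created private fence; released after use
    return CreateToyPrivateFence();
}

int main()
{
    bool     isChiFence = false;
    ToyFence appFence   = 7;
    std::printf("fence=%d isChi=%d\n", SelectInputFence(&appFence, &isChiFence), isChiFence);
    std::printf("fence=%d isChi=%d\n", SelectInputFence(nullptr, &isChiFence), isChiFence);
    return 0;
}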

Every Request's journey through the framework starts from this method as its entry point; the detailed flow is shown in the figure below:

The flow above can be summarized in the following steps:

  1. The call to Session's ProcessCaptureRequest method enters the Session, which then calls the Pipeline's ProcessRequest method to notify the Pipeline to start handling this Request.
  2. Inside the Pipeline, each Node's SetupRequest method is called first to set up that Node's output ports and input ports. All Nodes are then added to the DRQ (DeferredRequestQueue) by calling its AddDeferredNode method. The DRQ keeps two queues: m_readyNodes, which holds Nodes with no outstanding dependencies, and m_deferredNodes, which holds Nodes still waiting for their dependencies to be satisfied. When the DRQ's DispatchReadyNodes method is called, Nodes are taken from m_readyNodes and their ProcessRequest method is invoked to process this request inside the Node. During processing the Node updates metadata, which is published back to the DRQ; once the Node finishes, Nodes in m_deferredNodes whose dependencies are now satisfied are moved into m_readyNodes, and DispatchReadyNodes is called again to pick up the next ready Nodes.
  3. In parallel, once a Node finishes processing its data, the Pipeline is notified through CSLFenceCallback. The Pipeline then checks whether the Node's output port is a sink port (output to CHI). If it is not, the Pipeline updates the dependencies in the DRQ, moves Nodes without remaining dependencies into the m_readyNodes queue, and calls DispatchReadyNodes so the flow continues inside the DRQ. If it is a sink port, this Node is the very end of the Pipeline: SinkPortFenceSignaled is called to hand the data to the Session, and finally the Session's NotifyResult sends the result out to CHI.
4.4.3.2 DeferredRequestQueue

The flow above involves the concept of the DeferredRequestQueue (DRQ); here is a brief introduction:

DeferredRequestQueue derives from IPropertyPoolObserver and implements the OnPropertyUpdate/OnMetadataUpdate/OnPropertyFailure/OnMetadataFailure interfaces, which receive updates for metadata and properties. In addition, the DRQ provides the following key methods:

  • Create()

Creates the DRQ. It allocates m_pDependencyMap, which stores the dependency information, and registers the DRQ itself with the MetadataPool, so that whenever metadata or a property is updated, the DRQ is notified through the observer interfaces it implements.

  • DispatchReadyNodes()

Takes the Nodes in the m_readyNodes queue and posts them to the m_hDeferredWorker thread for processing.

  • AddDeferredNode()

Adds dependency information to m_pDependencyMap.

  • FenceSignaledCallback()

When a Node finishes processing a request, the DRQ is notified through a chain of callbacks that ends in this method. It first calls UpdateDependency to update the dependency information, and then calls DispatchReadyNodes to start processing the Nodes that are now in the ready state.

  • OnPropertyUpdate()

Defined in the IPropertyPoolObserver interface and implemented by the DRQ; it receives property-update notifications and internally calls UpdateDependency to refresh the dependencies.

  • OnMetadataUpdate()

Defined in the IPropertyPoolObserver interface and implemented by the DRQ; it receives metadata-update notifications and internally calls UpdateDependency to refresh the dependencies.

  • UpdateDependency()

Updates a Node's dependency information and moves Nodes that no longer have any unmet dependencies from the m_deferredNodes queue into m_readyNodes, so that they can be run by a later DispatchReadyNodes call.

  • DeferredWorkerWrapper()

The handler function of the m_hDeferredWorker thread. It processes the Nodes that need to issue the request, updates the dependencies once more, and finally calls DispatchReadyNodes again to continue processing.

Note that the Pipeline initially adds every Node to the DRQ by calling AddDeferredNode, at which point all Nodes land in m_readyNodes; DispatchReadyNodes is then called to kick off the DRQ's whole internal processing flow. The basic flow is outlined in the figure below; let's walk through it (a runnable toy model follows the steps):

  1. When the DRQ's DispatchReadyNodes method is called, it takes each Dependency off the m_readyNodes list in turn and posts it to the DeferredWorkerWrapper thread. That thread extracts the Node from the Dependency and calls its ProcessRequest method to process this request inside the Node. If the Node still has unmet dependencies after processing, AddDeferredNode is called to put the Node back on the m_deferredNodes list, and the new dependencies are recorded in the m_pDependencyMap hash table.
  2. While a Node is processing the request, it continuously updates metadata and properties, publishing them to the MetadataPool via MetadataSlot's PublishMetadata method. The MetadataPool then invokes the OnPropertyUpdate and OnMetadataUpdate callbacks that the DRQ registered at initialization, telling it that new metadata and properties are available. Inside these two callbacks, UpdateDependency records the metadata and property updates in m_pDependencyMap and moves Nodes that no longer have any unmet dependencies from m_deferredNodes into m_readyNodes, where they wait to be processed.
  3. At the same time, a Node's processing result is also reported to the Pipeline via ProcessFenceCallback, which calls the Pipeline's NonSinkPortFenceSignaled method. That method in turn calls the DRQ's FenceSignaledCallback, which again calls UpdateDependency to refresh the dependencies, moves Nodes whose dependencies are all satisfied from m_deferredNodes into m_readyNodes, and then calls DispatchReadyNodes to continue processing.
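To make the m_readyNodes/m_deferredNodes mechanics concrete, here is a small self-contained toy model of the pattern. It is illustrative only: the class, the integer dependency counter, and all names are invented stand-ins, whereas the real DRQ keys its dependencies on metadata, properties, and fences in m_pDependencyMap and dispatches on the m_hDeferredWorker thread.

// Standalone toy model of the DRQ dispatch pattern (illustrative only; real
// CamX types and signatures differ). Compile with: g++ -std=c++17 toy_drq.cpp
#include <cstdio>
#include <list>
#include <string>

struct ToyNode
{
    std::string name;
    int         pendingDeps; // count of unmet dependencies
    void ProcessRequest() { std::printf("%s: ProcessRequest\n", name.c_str()); }
};

class ToyDRQ
{
public:
    // Mirrors AddDeferredNode: nodes without dependencies are immediately ready
    void AddDeferredNode(ToyNode* pNode)
    {
        (0 == pNode->pendingDeps ? m_readyNodes : m_deferredNodes).push_back(pNode);
    }

    // Mirrors DispatchReadyNodes: drain the ready queue and run each node
    void DispatchReadyNodes()
    {
        while (!m_readyNodes.empty())
        {
            ToyNode* pNode = m_readyNodes.front();
            m_readyNodes.pop_front();
            pNode->ProcessRequest();
        }
    }

    // Mirrors UpdateDependency (driven by OnMetadataUpdate/FenceSignaledCallback):
    // one dependency of every deferred node is satisfied; newly free nodes move
    // from m_deferredNodes into m_readyNodes
    void UpdateDependency()
    {
        for (auto it = m_deferredNodes.begin(); it != m_deferredNodes.end();)
        {
            if (0 == --(*it)->pendingDeps)
            {
                m_readyNodes.push_back(*it);
                it = m_deferredNodes.erase(it);
            }
            else
            {
                ++it;
            }
        }
    }

private:
    std::list<ToyNode*> m_readyNodes;    // nodes ready to run
    std::list<ToyNode*> m_deferredNodes; // nodes waiting on dependencies
};

int main()
{
    ToyNode ife{"IFE", 0}, ipe{"IPE", 1};
    ToyDRQ  drq;
    drq.AddDeferredNode(&ife); // no deps: goes to m_readyNodes
    drq.AddDeferredNode(&ipe); // depends on IFE output: deferred
    drq.DispatchReadyNodes();  // runs IFE
    drq.UpdateDependency();    // IFE's "fence" signaled -> IPE becomes ready
    drq.DispatchReadyNodes();  // runs IPE
    return 0;
}

Running it prints IFE first and IPE only after UpdateDependency simulates the fence/metadata notification, which is exactly the ordering guarantee the DRQ provides.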

4.5 Uploading Capture Results

After the user opens the camera app and the camera framework receives a Request, it starts processing it; once image data is produced, it is returned up through layers of callbacks until it finally reaches the application layer for display. Here we briefly walk through how the CamX-CHI portion delivers the capture results upward:

Each Request corresponds to three Results: partial metadata, metadata, and image data. For each Result, the upload can be roughly divided into two stages:

  • The Session finishes processing the image data and sends the result to the Usecase
  • The Usecase receives the data from the Session and uploads it to the Provider

First, let's look at how the Session, after finishing the image-data processing, sends the result to the Usecase:

During the flow of a request, as soon as a Node produces partial metadata, it calls its ProcessPartialMetadataDone method to notify its owning Pipeline, which internally calls the Pipeline's NotifyNodePartialMetadataDone method. Each call to NotifyNodePartialMetadataDone increments pPerRequestInfo->numNodesPartialMetadataDone and checks whether the count equals the number of Nodes in the Pipeline. Once they are equal, every Node has finished updating its partial metadata, and ProcessPartialMetadataRequestIdDone is called: it extracts the partial metadata, repackages it into a ResultsData structure, and passes it into the Session through the Session's NotifyResult method. Inside the Session, a chain of calls eventually invokes the ChiProcessPartialCaptureResult method of the member variable m_chiCallBacks, which is exactly the Usecase callback passed in when the Session was created (AdvancedCameraUsecase::ProcessDriverPartialCaptureResultCb); through it, the metadata is returned to CHI. A toy model of this completion counting follows below.
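A minimal sketch of the counting pattern: ToyPipeline and its counter are hypothetical stand-ins for Pipeline and pPerRequestInfo->numNodesPartialMetadataDone (which the real code of course tracks per request, under proper locking):

// Toy model of the Pipeline's metadata-completion counting (illustrative only)
#include <cstdio>

class ToyPipeline
{
public:
    explicit ToyPipeline(unsigned nodeCount) : m_nodeCount(nodeCount) {}

    // Mirrors NotifyNodePartialMetadataDone: every node reports once per request;
    // when the last node reports, the aggregated result goes to the Session.
    void NotifyNodePartialMetadataDone(unsigned long long requestId)
    {
        if (++m_numNodesPartialMetadataDone == m_nodeCount)
        {
            m_numNodesPartialMetadataDone = 0;
            ProcessPartialMetadataRequestIdDone(requestId);
        }
    }

private:
    void ProcessPartialMetadataRequestIdDone(unsigned long long requestId)
    {
        // In CamX this wraps the partial metadata into a ResultsData structure
        // and hands it to Session::NotifyResult; here we just log it.
        std::printf("request %llu: partial metadata complete -> Session\n", requestId);
    }

    unsigned m_nodeCount;
    unsigned m_numNodesPartialMetadataDone = 0;
};

int main()
{
    ToyPipeline pipeline(3); // e.g. Sensor, IFE, IPE
    for (int node = 0; node < 3; ++node)
    {
        pipeline.NotifyNodePartialMetadataDone(1); // the last call triggers the notify
    }
    return 0;
}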

Similarly, the metadata path closely mirrors the partial metadata path: while processing a request, each Node calls ProcessMetadataDone to send its data to the Pipeline. Once every Node has sent its metadata, the Pipeline calls NotifyNodeMetadataDone to deliver the final result to the Session; after a chain of calls, the Session invokes the ChiProcessCaptureResult method of its member variable m_chiCallBacks, sending the result to the Usecase in CHI.

The image data path differs slightly from the two metadata paths: once a Node finishes processing the image data, it calls its ProcessFenceCallback method, which checks whether the current output is a sink buffer. If it is, the Pipeline's SinkPortFenceSignaled method is called to send the data to the Pipeline, which forwards it to the Session; after a chain of calls, the Session invokes the ChiProcessCaptureResult method of its member variable m_chiCallBacks, sending the result to the Usecase in CHI. The sketch below distills this sink/non-sink decision.
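Distilled, the sink/non-sink branch looks roughly like this (illustrative only; ToyPort stands in for the Node's per-port bookkeeping, and the real callback operates on CSL fences):

// Toy sketch of the sink/non-sink decision in a node's fence callback
#include <cstdio>

struct ToyPort { bool isSinkPort; };

void ProcessFenceCallback(const ToyPort& port, unsigned long long requestId)
{
    if (port.isSinkPort)
    {
        // End of the pipeline: hand the buffer to the Session (SinkPortFenceSignaled)
        std::printf("request %llu: sink buffer -> Session::NotifyResult\n", requestId);
    }
    else
    {
        // Intermediate output: satisfy downstream dependencies and re-dispatch
        std::printf("request %llu: non-sink buffer -> DRQ::FenceSignaledCallback\n", requestId);
    }
}

int main()
{
    ProcessFenceCallback(ToyPort{false}, 1); // e.g. IFE -> IPE intermediate buffer
    ProcessFenceCallback(ToyPort{true},  1); // e.g. IPE -> preview sink buffer
    return 0;
}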

Next, let's look at how the Usecase, once it receives the data from the Session, sends it to the Provider:

4.5.1 ProcessResult

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/// AdvancedCameraUsecase::ProcessResult
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
VOID AdvancedCameraUsecase::ProcessResult(
    CHICAPTURERESULT* pResult,
    VOID*             pPrivateCallbackData)
{
    if (TRUE == AdvancedFeatureEnabled())
    {
        SessionPrivateData* pSessionPrivateData = static_cast<SessionPrivateData*>(pPrivateCallbackData);
        UINT32              sessionId           = pSessionPrivateData->sessionId;

        if ((NULL != pResult->pOutputMetadata) && (sessionId == m_realtimeSessionId))
        {
            ParseResultMetadata(m_pMetadataManager->GetMetadataFromHandle(pResult->pOutputMetadata));
        }

        m_pResultMutex->Lock();

        Feature* pFeature = FindFeatureToProcessResult(static_cast<CHIPRIVDATA*>(pResult->pPrivData),
                                                       pResult->frameworkFrameNum,
                                                       pPrivateCallbackData);
        if (NULL != pFeature)
        {
            pFeature->ProcessResult(pResult, pPrivateCallbackData);
        }
        else
        {
            CHX_LOG_ERROR("pFeature is NULL.");
        }

        m_pResultMutex->Unlock();
    }
    else
    {
        m_pResultMutex->Lock();
        CameraUsecaseBase::SessionCbCaptureResult(pResult, pPrivateCallbackData);
        m_pResultMutex->Unlock();
    }

    if (2 <= ExtensionModule::GetInstance()->EnableDumpDebugData())
    {
        // Process debug-data
        ProcessDebugData(pResult, pPrivateCallbackData, pResult->frameworkFrameNum);
    }
}

4.5.2 ProcessDriverPartialCaptureResult

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/// AdvancedCameraUsecase::ProcessDriverPartialCaptureResult
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
VOID AdvancedCameraUsecase::ProcessDriverPartialCaptureResult(
    CHIPARTIALCAPTURERESULT* pResult,
    VOID*                    pPrivateCallbackData)
{
    if (TRUE == AdvancedFeatureEnabled())
    {
        SessionPrivateData* pSessionPrivateData = static_cast<SessionPrivateData*>(pPrivateCallbackData);
        UINT32              sessionId           = pSessionPrivateData->sessionId;

        if ((NULL != pResult->pPartialResultMetadata) && (sessionId == m_realtimeSessionId))
        {
            ParseResultMetadata(m_pMetadataManager->GetMetadataFromHandle(pResult->pPartialResultMetadata));
        }

        m_pResultMutex->Lock();

        Feature* pFeature = FindFeatureToProcessResult(static_cast<CHIPRIVDATA*>(pResult->pPrivData),
                                                       pResult->frameworkFrameNum,
                                                       pPrivateCallbackData);
        if (NULL != pFeature)
        {
            if (PartialMetaSupport::CombinedPartialMeta == ExtensionModule::GetInstance()->EnableCHIPartialData())
            {
                pFeature->ProcessCHIPartialData(pResult->frameworkFrameNum, sessionId);
            }
            pFeature->ProcessDriverPartialCaptureResult(pResult, pPrivateCallbackData);
        }
        else
        {
            CHX_LOG_ERROR("pFeature is NULL.");
        }

        m_pResultMutex->Unlock();
    }
    else
    {
        CameraUsecaseBase::SessionCbPartialCaptureResult(pResult, pPrivateCallbackData);
    }
}

4.5.3 SessionCbCaptureResult

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/// CameraUsecaseBase::SessionCbCaptureResult
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
VOID CameraUsecaseBase::SessionCbCaptureResult(
    ChiCaptureResult* pCaptureResult,
    VOID*             pPrivateCallbackData)
{
    SessionPrivateData* pSessionPrivateData = static_cast<SessionPrivateData*>(pPrivateCallbackData);
    CameraUsecaseBase*  pCameraUsecase      = static_cast<CameraUsecaseBase*>(pSessionPrivateData->pUsecase);

    for (UINT stream = 0; stream < pCaptureResult->numOutputBuffers; stream++)
    {
        UINT index = 0;
        if (TRUE == IsThisClonedStream(pCameraUsecase->m_pClonedStream,
                                       pCaptureResult->pOutputBuffers[stream].pStream,
                                       &index))
        {
            CHISTREAMBUFFER* pTempStreamBuffer = const_cast<CHISTREAMBUFFER*>(&pCaptureResult->pOutputBuffers[stream]);
            pTempStreamBuffer->pStream         = pCameraUsecase->m_pFrameworkOutStreams[index];
        }
    }

    pCameraUsecase->SessionProcessResult(pCaptureResult, pSessionPrivateData);
}

4.5.4 SessionProcessResult

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/// CameraUsecaseBase::SessionProcessResult
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
VOID CameraUsecaseBase::SessionProcessResult(
    ChiCaptureResult*         pResult,
    const SessionPrivateData* pSessionPrivateData)
{
    CHX_LOG("CameraUsecaseBase::SessionProcessResult for frame: %u", pResult->frameworkFrameNum);

    CDKResult result                = CDKResultSuccess;
    UINT32    resultFrameNum        = pResult->frameworkFrameNum;
    UINT32    resultFrameIndex      = resultFrameNum % MaxOutstandingRequests;
    BOOL      isAppResultsAvailable = FALSE;

    camera3_capture_result_t* pUsecaseResult = GetCaptureResult(resultFrameIndex);

    pUsecaseResult->frame_number = resultFrameNum;

    // Fill all the info in m_captureResult and call ProcessAndReturnFinishedResults to send the meta
    // callback in sequence
    m_pAppResultMutex->Lock();
    for (UINT i = 0; i < pResult->numOutputBuffers; i++)
    {
        camera3_stream_buffer_t* pResultBuffer =
            const_cast<camera3_stream_buffer_t*>(&pUsecaseResult->output_buffers[i + pUsecaseResult->num_output_buffers]);

        ChxUtils::PopulateChiToHALStreamBuffer(&pResult->pOutputBuffers[i], pResultBuffer);
        isAppResultsAvailable = TRUE;
    }
    pUsecaseResult->num_output_buffers += pResult->numOutputBuffers;
    m_pAppResultMutex->Unlock();

    if (NULL != pResult->pInputBuffer)
    {
        camera3_stream_buffer_t* pResultInBuffer =
            const_cast<camera3_stream_buffer_t*>(pUsecaseResult->input_buffer);

        ChxUtils::PopulateChiToHALStreamBuffer(pResult->pInputBuffer, pResultInBuffer);
        isAppResultsAvailable = TRUE;
    }

    if ((NULL != pResult->pInputMetadata) && (NULL != pResult->pOutputMetadata))
    {
        ChiMetadata* pChiInputMetadata  = m_pMetadataManager->GetMetadataFromHandle(pResult->pInputMetadata);
        ChiMetadata* pChiOutputMetadata = m_pMetadataManager->GetMetadataFromHandle(pResult->pOutputMetadata);

        if ((pResult->frameworkFrameNum >= m_batchRequestStartIndex) &&
            (pResult->frameworkFrameNum <= m_batchRequestEndIndex))
        {
            result = HandleBatchModeResult(pResult,
                                           pChiOutputMetadata,
                                           resultFrameIndex,
                                           GetMetadataClientIdFromPipeline(pSessionPrivateData->sessionId, 0));
        }
        else
        {
            if ((CDKResultSuccess == result) && (FALSE == m_isMetadataAvailable[resultFrameIndex]) &&
                (FALSE == m_isMetadataSent[resultFrameIndex]))
            {
                result = Usecase::UpdateAppResultMetadata(pChiOutputMetadata,
                                                          resultFrameIndex,
                                                          GetMetadataClientIdFromPipeline(pSessionPrivateData->sessionId, 0));
                if (CDKResultSuccess == result)
                {
                    SetMetadataAvailable(resultFrameIndex);
                }
            }
        }

        if (CDKResultSuccess == result)
        {
            // Release the metadata buffers
            m_pMetadataManager->Release(pChiOutputMetadata);
            m_pMetadataManager->Release(pChiInputMetadata);

            CHX_LOG("Released output metadata buffer for session id: %d: %d",
                    pSessionPrivateData->sessionId, result);

            pUsecaseResult->partial_result = pResult->numPartialMetadata;
            isAppResultsAvailable          = TRUE;
        }
    }

    if (TRUE == isAppResultsAvailable)
    {
        ProcessAndReturnFinishedResults();
    }
}

4.5.5 Usecase::ReturnFrameworkResult

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/// Usecase::ReturnFrameworkResult
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
VOID Usecase::ReturnFrameworkResult(
    const camera3_capture_result_t* pResult,
    UINT32                          cameraID)
{
    camera3_capture_result_t* pOverrideResult             = const_cast<camera3_capture_result_t*>(pResult);
    UINT32                    chiOriginalOverrideFrameNum = pResult->frame_number;
    UINT32                    resultFrameIndexChi         = chiOriginalOverrideFrameNum % MaxOutstandingRequests;
    BOOL                      metadataResult              = TRUE;
    BOOL                      resultCanBeSent             = TRUE;
    BOOL                      allBuffersReturned          = FALSE;

    pOverrideResult->frame_number = GetAppFrameNum(pResult->frame_number);

    m_pMapLock->Lock();
    CHX_LOG_INFO("chiOriginalOverrideFrameNum: %d frame_number: %d resultFrameIndexF: %d FW: %d, Buffer Count: %d RESULT: %p",
                 chiOriginalOverrideFrameNum,
                 pOverrideResult->frame_number,
                 resultFrameIndexChi,
                 m_captureResult[resultFrameIndexChi].frame_number,
                 m_numberOfPendingOutputBuffers[resultFrameIndexChi],
                 pResult->result);

    if ((NULL != pResult->result) &&
        (NULL != m_pLogicalCameraInfo) &&
        (TRUE == UsecaseSelector::IsQuadCFASensor(m_pLogicalCameraInfo, NULL)) &&
        (FALSE == ExtensionModule::GetInstance()->ExposeFullsizeForQuadCFA()))
    {
        // Map ROIs (AEC/AF/crop region) from full-active-array-size based to binning-active-array-size based
        OverrideResultMetaForQCFA(const_cast<camera_metadata_t*>(pResult->result));
    }

    if (chiOriginalOverrideFrameNum != m_captureResult[resultFrameIndexChi].frame_number)
    {
        CHX_LOG_ERROR("Unexpected Frame Number %u", chiOriginalOverrideFrameNum);
        resultCanBeSent = FALSE;
    }

    camera3_capture_request_t* pRequest = &m_pendingPCRs[resultFrameIndexChi];
    if (0 != m_numberOfPendingOutputBuffers[resultFrameIndexChi])
    {
        // Set NULL to returned buffer so that it won't be returned again in flush call
        CHX_LOG("pResult->num_output_buffers %d pending buffers %d",
                pResult->num_output_buffers,
                m_numberOfPendingOutputBuffers[resultFrameIndexChi]);
        for (UINT resultIdx = 0; resultIdx < pResult->num_output_buffers; resultIdx++)
        {
            camera3_stream_buffer_t* pStreamBuffer = NULL;
            for (UINT requestIdx = 0; requestIdx < pRequest->num_output_buffers; requestIdx++)
            {
                pStreamBuffer = const_cast<camera3_stream_buffer_t*>(&pRequest->output_buffers[requestIdx]);
                if (pResult->output_buffers[resultIdx].stream == pRequest->output_buffers[requestIdx].stream)
                {
                    CHX_LOG("pStreamBuffer %p, i %d, j %d buffer %p",
                            pStreamBuffer,
                            resultIdx,
                            requestIdx,
                            pStreamBuffer->buffer);
                    pStreamBuffer->buffer = NULL;
                    break;
                }
            }
        }
    }
    else
    {
        allBuffersReturned = TRUE;
    }

    if (NULL != pResult->input_buffer)
    {
        pRequest->input_buffer = NULL;
        allBuffersReturned     = FALSE;
    }

    // Decrement the number of pending output buffers given the desired result to return
    if (m_numberOfPendingOutputBuffers[resultFrameIndexChi] >= pResult->num_output_buffers)
    {
        m_numberOfPendingOutputBuffers[resultFrameIndexChi] -= pResult->num_output_buffers;
    }
    else
    {
        resultCanBeSent = FALSE;

        CHX_LOG_ERROR("ChiFrame: %d App Frame: %d - "
                      "pResult contains more buffers (%d) than the expected number of buffers (%d) to return to the framework!",
                      chiOriginalOverrideFrameNum,
                      pOverrideResult->frame_number,
                      pResult->num_output_buffers,
                      m_numberOfPendingOutputBuffers[resultFrameIndexChi]);
    }

    CHX_LOG("m_numberOfPendingOutputBuffers = %d", m_numberOfPendingOutputBuffers[resultFrameIndexChi]);

    BOOL metadataAvailable = ((NULL != pOverrideResult->result) &&
                              (0 != pOverrideResult->partial_result) &&
                              (pOverrideResult->partial_result <
                               ExtensionModule::GetInstance()->GetNumMetadataResults())) ? TRUE : FALSE;

    // Block AFRegions in CHI level for fixed-focus lens. OEM can customize this based on need.
    // AFRegion is always published in CamX driver
    const LogicalCameraInfo* pLogicalCameraInfo = NULL;
    pLogicalCameraInfo = ExtensionModule::GetInstance()->GetPhysicalCameraInfo(cameraID);
    if ((NULL != pLogicalCameraInfo) && (NULL != pOverrideResult->result))
    {
        const CHICAMERAINFO* pChiCameraInfo = &(pLogicalCameraInfo->m_cameraCaps);
        if (TRUE == pChiCameraInfo->lensCaps.isFixedFocus)
        {
            camera_metadata_entry_t metadata_entry;
            INT findResult = find_camera_metadata_entry(
                const_cast<camera_metadata_t*>(pOverrideResult->result), ANDROID_CONTROL_AF_REGIONS, &metadata_entry);
            if (0 == findResult) // OK
            {
                delete_camera_metadata_entry(const_cast<camera_metadata_t*>(pOverrideResult->result), metadata_entry.index);
            }
        }
    }

    PartialResultCount partialResultCount =
        static_cast<PartialResultCount>(pOverrideResult->partial_result);
    MetaDataResultCount totalMetaDataCount =
        static_cast<MetaDataResultCount>(ExtensionModule::GetInstance()->GetNumMetadataResults());

    // Check if this result is only for partial metadata
    if ((0 < static_cast<UINT8>(partialResultCount)) &&
        (static_cast<UINT8>(partialResultCount) < static_cast<UINT8>(totalMetaDataCount)))
    {
        // Check if final metadata has already been sent
        if (TRUE == m_requestFlags[resultFrameIndexChi].isOutputMetaDataSent)
        {
            resultCanBeSent = FALSE;
            CHX_LOG_WARN("Attempting to Send Partial Metadata after Final Metadata has been sent for Chi Frame: %u FW Frame: %u",
                         chiOriginalOverrideFrameNum,
                         pResult->frame_number);

            if (pOverrideResult->num_output_buffers > 0)
            {
                CHX_LOG_ERROR("Partial Metadata sent with buffers after Metadata is sent for Chi Frame: %u FW Frame: %u",
                              chiOriginalOverrideFrameNum,
                              pResult->frame_number);
            }
        }
    }

    BOOL metadataErrorSent   = m_requestFlags[resultFrameIndexChi].isMetadataErrorSent;
    BOOL allMetadataReturned = m_requestFlags[resultFrameIndexChi].isOutputMetaDataSent;

    if ((TRUE == allBuffersReturned) &&
        ((TRUE == metadataErrorSent) ||
         (TRUE == allMetadataReturned)))
    {
        CHX_LOG_WARN("Result not returned - framework does not need more results for this request: "
                     "allBuffersReturned %d, metadataErrorSent: %d, allMetadataReturned: %d ",
                     allBuffersReturned, metadataErrorSent, allMetadataReturned);
        resultCanBeSent = FALSE;
    }

    if ((FALSE == m_requestFlags[resultFrameIndexChi].isInErrorState) && (TRUE == resultCanBeSent))
    {
        metadataResult = HandleMetadataResultReturn(pOverrideResult, pResult->frame_number, resultFrameIndexChi, cameraID);
        ExtensionModule::GetInstance()->ReturnFrameworkResult(pResult, cameraID);
    }
    else
    {
        CHX_LOG_WARN("Cannot return results for Chi Frame: %u FW Frame: %u",
                     chiOriginalOverrideFrameNum,
                     pResult->frame_number);
    }
    m_pMapLock->Unlock();
    pOverrideResult->frame_number = chiOriginalOverrideFrameNum;
}

4.5.6 ExtensionModule::ReturnFrameworkResult

////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
/// ExtensionModule::ReturnFrameworkResult
////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
VOID ExtensionModule::ReturnFrameworkResult(
    const camera3_capture_result_t* pResult,
    UINT32                          cameraID)
{
    if ((NULL != m_pPerfLockManager[cameraID]) && (FALSE == m_firstResult))
    {
        m_pPerfLockManager[cameraID]->AcquirePerfLock(m_CurrentpowerHint);
        m_previousPowerHint = m_CurrentpowerHint;
        m_firstResult       = TRUE;
    }

    if (pResult->frame_number == m_longExposureFrame[cameraID])
    {
        if (pResult->num_output_buffers != 0)
        {
            CHX_LOG_INFO("Returning long exposure snapshot");
            ChxUtils::AtomicStoreU32(&m_aLongExposureInProgress[cameraID], FALSE);
            m_longExposureFrame[cameraID] = static_cast<UINT32>(InvalidFrameNumber);
        }
    }

    m_HALOps[cameraID].process_capture_result(m_logicalCameraInfo[cameraID].m_pCamera3Device, pResult);

    if (NULL != pResult->output_buffers)
    {
        for (UINT i = 0; i < pResult->num_output_buffers; i++)
        {
            if ((NULL != m_pPerfLockManager[cameraID]) &&
                (pResult->output_buffers[i].stream->format == ChiStreamFormatBlob) &&
                ((pResult->output_buffers[i].stream->data_space == static_cast<android_dataspace_t>(DataspaceV0JFIF)) ||
                 (pResult->output_buffers[i].stream->data_space == static_cast<android_dataspace_t>(DataspaceJFIF))))
            {
                m_pPerfLockManager[cameraID]->ReleasePerfLock(PERF_LOCK_SNAPSHOT_CAPTURE);
                break;
            }
        }
    }
}

The code above takes the commonly used AdvancedCameraUsecase as its example:

As the figure above shows, the overall result flow is fairly clear. CamX hands results back to CHI through callback methods. In CHI, the first question is whether the result needs to go to a specific Feature; if so, that Feature's ProcessDriverPartialCaptureResult or ProcessResult method is called to deliver the result into the Feature, and once processing completes, CameraUsecaseBase's ProcessAndReturnPartialMetadataFinishedResults and ProcessAndReturnFinishedResults methods are called to send the result to the Usecase. If the result does not need Feature processing, AdvancedCameraUsecase instead calls CameraUsecaseBase's SessionCbPartialCaptureResult and SessionCbCaptureResult methods. Either way, Usecase::ReturnFrameworkResult then sends the result to the ExtensionModule, which invokes the CamX callback it stores, process_capture_result, to deliver the result to the HALDevice in CamX; the HALDevice in turn uses the callback previously passed down from the upper layer to deliver the result, finally, to CameraDeviceSession.

5. Summary

From the walkthrough above, it is clear that the CamX-CHI framework is well designed: the directory structure is clear, the framework is simple and efficient, and the flow-control logic is well delineated. For a given image request, the whole flow passes through the Usecase, Feature, Session, and Pipeline, is handed to specific Nodes for processing, and finally produces the output result. Moreover, compared with the old QCamera & MM-Camera framework, where extending a single algorithm required weaving custom modifications throughout the flow code, CamX-CHI puts custom implementations into CHI, which improves extensibility and lowers the barrier to entry: a platform vendor can successfully add new features with small-scale changes even without being deeply familiar with the CamX framework. No framework is perfect, though. This one relies heavily on asynchronous processing, which makes problems harder to localize and solve and puts considerable pressure on developers. It also places high demands on memory, so on low-end and especially low-memory devices the framework's runtime efficiency may be constrained, leaving camera performance below expectations.