Android Refresh Mechanism: SurfaceFlinger Internals


1. Overview

SurfaceFlinger is the core service that gets application UI onto the screen. Every Window created on the Android platform is backed by a Surface, and every visible Surface is composited onto the display device by SurfaceFlinger.

The surfaceflinger process is created by init and runs as a standalone process. An Android application process must interact with the SurfaceFlinger process, over Binder IPC, to get its UI drawn into the frame buffer.

SurfaceComposerClient is an important class here: WMS interacts with SurfaceFlinger through its mClient and mParent members.

sp<ISurfaceComposerClient>  mClient;
wp<IGraphicBufferProducer> mParent;

Each application has a corresponding Client inside SurfaceFlinger. When an application goes through onResume, the flow is as follows:

1. WMS requests SurfaceFlinger to create the Surface;

2. SurfaceFlinger creates a Layer;

3. A producer Binder object (IGraphicBufferProducer) is passed back to the application via WMS, so the application can send frames to SurfaceFlinger directly (a minimal client-side sketch follows).
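To make this concrete, here is a minimal sketch of the client side, assuming the standard libgui API; the surface name, size, and format are purely illustrative. SurfaceComposerClient opens the Binder connection to SurfaceFlinger; createSurface asks SurfaceFlinger to create the Layer and returns a SurfaceControl whose Surface wraps the producer used for queuing frames:

#include <gui/SurfaceComposerClient.h>
#include <gui/Surface.h>
#include <ui/PixelFormat.h>
#include <utils/String8.h>

using namespace android;

void createAppSurface() {
    // opens a connection (Client) to SurfaceFlinger over Binder
    sp<SurfaceComposerClient> client = new SurfaceComposerClient();
    // SurfaceFlinger creates a Layer and hands back a SurfaceControl
    sp<SurfaceControl> control =
            client->createSurface(String8("demo"), 1080, 1920, PIXEL_FORMAT_RGBA_8888, 0);
    // the Surface is the producer end used to queue frames to SurfaceFlinger
    sp<Surface> surface = control->getSurface();
}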

2. Startup Process

SurfaceFlinger is started through surfaceflinger.rc:

[->native/services/surfaceflinger/surfaceflinger.rc]

service surfaceflinger /system/bin/surfaceflinger
class core animation
user system
group graphics drmrpc readproc
onrestart restart zygote
writepid /dev/stune/foreground/tasks
socket pdx/system/vr/display/client stream 0666 system graphics u:object_r:pdx_display_client_endpoint_socket:s0
socket pdx/system/vr/display/manager stream 0666 system graphics u:object_r:pdx_display_manager_endpoint_socket:s0
socket pdx/system/vr/display/vsync stream 0666 system graphics u:object_r:pdx_display_vsync_endpoint_socket:s0

SurfaceFlinger is declared in the core service class, so when it restarts, zygote is restarted as well (onrestart restart zygote). The SurfaceFlinger service starts in its main function.

2.1 main

[->native/services/surfaceflinger/main_surfaceflinger.cpp]

int main(int, char**) {
signal(SIGPIPE, SIG_IGN);

hardware::configureRpcThreadpool(1 /* maxThreads */,
false /* callerWillJoin */);

// start the graphics allocator service
startGraphicsAllocatorService();

// When SF is launched in its own process, limit the number of
// binder threads to 4.
// cap the Binder thread pool at 4 threads
ProcessState::self()->setThreadPoolMaxThreadCount(4);

// start the thread pool
sp<ProcessState> ps(ProcessState::self());
ps->startThreadPool();

// instantiate surfaceflinger
sp<SurfaceFlinger> flinger = DisplayUtils::getInstance()->getSFInstance();

setpriority(PRIO_PROCESS, 0, PRIORITY_URGENT_DISPLAY);

set_sched_policy(0, SP_FOREGROUND);

// Put most SurfaceFlinger threads in the system-background cpuset
// Keeps us from unnecessarily using big cores
// Do this after the binder thread pool init
if (cpusets_enabled()) set_cpuset_policy(0, SP_SYSTEM);

// initialize before clients can connect
flinger->init();

// publish surface flinger
sp<IServiceManager> sm(defaultServiceManager());
sm->addService(String16(SurfaceFlinger::getServiceName()), flinger, false,
IServiceManager::DUMP_FLAG_PRIORITY_CRITICAL | IServiceManager::DUMP_FLAG_PROTO);

// publish GpuService
sp<GpuService> gpuservice = new GpuService();
sm->addService(String16(GpuService::SERVICE_NAME), gpuservice, false);

// start the display service
startDisplayService(); // dependency on SF getting registered above

struct sched_param param = {0};
param.sched_priority = 2;
if (sched_setscheduler(0, SCHED_FIFO, &param) != 0) {
ALOGE("Couldn't set SCHED_FIFO");
}

// run surface flinger in this thread
flinger->run();

return 0;
}

The main work of main() is:

1. Start the graphics allocator service;

2. Start the Binder thread pool, capped at 4 binder threads;

3. Set the surfaceflinger process to a high priority (PRIORITY_URGENT_DISPLAY), foreground scheduling policy, and the system-background cpuset;

4. Create the SurfaceFlinger instance and initialize it;

5. Register the SurfaceFlinger service and GpuService with ServiceManager;

6. Start the display service and finally call SurfaceFlinger's run method.

2.2 Creating SurfaceFlinger

SurfaceFlinger* DisplayUtils::getSFInstance() {
if (sUseExtendedImpls) {
return new ExSurfaceFlinger();
} else {
return new SurfaceFlinger();
}
}

[->native/services/surfaceflinger/SurfaceFlinger.cpp]

SurfaceFlinger::SurfaceFlinger(SurfaceFlinger::SkipInitializationTag)
: BnSurfaceComposer(),
mTransactionFlags(0),
mTransactionPending(false),
mAnimTransactionPending(false),
mLayersRemoved(false),
mLayersAdded(false),
mRepaintEverything(0),
mBootTime(systemTime()),
mBuiltinDisplays(),
mVisibleRegionsDirty(false),
mGeometryInvalid(false),
mAnimCompositionPending(false),
mBootStage(BootStage::BOOTLOADER),
mActiveDisplays(0),
mBuiltInBitmask(0),
mPluggableBitmask(0),
mDebugRegion(0),
mDebugDDMS(0),
mDebugDisableHWC(0),
mDebugDisableTransformHint(0),
mDebugInSwapBuffers(0),
mLastSwapBufferTime(0),
mDebugInTransaction(0),
mLastTransactionTime(0),
mForceFullDamage(false),
mPrimaryDispSync("PrimaryDispSync"),
mPrimaryHWVsyncEnabled(false),
mHWVsyncAvailable(false),
mHasPoweredOff(false),
mNumLayers(0),
mVrFlingerRequestsDisplay(false),
mMainThreadId(std::this_thread::get_id()),
mCreateBufferQueue(&BufferQueue::createBufferQueue),
mCreateNativeWindowSurface(&impl::NativeWindowSurface::create) {}

SurfaceFlinger inherits from BnSurfaceComposer. The flinger variable is an sp<> strong pointer; the first time the object is referenced by a strong pointer, its onFirstRef method is called, as sketched below.
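A minimal sketch (not SurfaceFlinger code) of the RefBase mechanism relied on here; the Service name is made up:

#include <utils/RefBase.h>

struct Service : public android::RefBase {
    void onFirstRef() override {
        // called exactly once, when the first sp<> takes a strong reference
    }
};

int main() {
    android::sp<Service> s = new Service();  // onFirstRef() runs here
    return 0;
}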

2.2.1 onFirstRef

[->native/services/surfaceflinger/SurfaceFlinger.cpp]

void SurfaceFlinger::onFirstRef()
{
mEventQueue->init(this);
}
2.2.2 MQ.init

[->native/services/surfaceflinger/MessageQueue.cpp]

void MessageQueue::init(const sp<SurfaceFlinger>& flinger) {
mFlinger = flinger;
mLooper = new Looper(true);
mHandler = new Handler(*this);
}

Handler is an inner class of MessageQueue; the native Handler/Looper mechanism works the same way as the Java one, as the sketch below shows.
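A minimal sketch of that native mechanism (illustrative only, not the SurfaceFlinger code; MyHandler and the message code 0 are made-up names):

#include <utils/Looper.h>

using android::Looper;
using android::Message;
using android::MessageHandler;
using android::sp;

struct MyHandler : public MessageHandler {
    void handleMessage(const Message& message) override {
        // dispatch on message.what, just like MessageQueue::Handler::handleMessage does
    }
};

void runLoop() {
    sp<Looper> looper = new Looper(true /* allowNonCallbacks */);
    sp<MyHandler> handler = new MyHandler();
    looper->sendMessage(handler, Message(0 /* what */));
    looper->pollOnce(-1);  // blocks until the message is delivered to handleMessage
}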

2.3 SF.init

[->native/services/surfaceflinger/SurfaceFlinger.cpp]

// Do not call property_set on main thread which will be blocked by init
// Use StartPropertySetThread instead.
void SurfaceFlinger::init() {
ALOGI( "SurfaceFlinger's main thread ready to run. "
"Initializing graphics H/W...");

ALOGI("Phase offest NS: %" PRId64 "", vsyncPhaseOffsetNs);

Mutex::Autolock _l(mStateLock);

// start the EventThread
// two EventThreads are started: one for apps ("app") and one for SurfaceFlinger itself ("sf")
mEventThreadSource =
std::make_unique<DispSyncSource>(&mPrimaryDispSync, SurfaceFlinger::vsyncPhaseOffsetNs,
true, "app");
mEventThread = std::make_unique<impl::EventThread>(mEventThreadSource.get(),
[this]() { resyncWithRateLimit(); },
impl::EventThread::InterceptVSyncsCallback(),
"appEventThread");
mSfEventThreadSource =
std::make_unique<DispSyncSource>(&mPrimaryDispSync,
SurfaceFlinger::sfVsyncPhaseOffsetNs, true, "sf");

mSFEventThread =
std::make_unique<impl::EventThread>(mSfEventThreadSource.get(),
[this]() { resyncWithRateLimit(); },
[this](nsecs_t timestamp) {
mInterceptor->saveVSyncEvent(timestamp);
},
"sfEventThread");
mEventQueue->setEventThread(mSFEventThread.get());
mVsyncModulator.setEventThreads(mSFEventThread.get(), mEventThread.get());

// Get a RenderEngine for the given display / config (can't fail)
getBE().mRenderEngine =
RE::impl::RenderEngine::create(HAL_PIXEL_FORMAT_RGBA_8888,
hasWideColorDisplay
? RE::RenderEngine::WIDE_COLOR_SUPPORT
: 0);
LOG_ALWAYS_FATAL_IF(getBE().mRenderEngine == nullptr, "couldn't create RenderEngine");

LOG_ALWAYS_FATAL_IF(mVrFlingerRequestsDisplay,
"Starting with vr flinger active is not currently supported.");
// create HWComposer
getBE().mHwc.reset(
new HWComposer(std::make_unique<Hwc2::impl::Composer>(getBE().mHwcServiceName)));
// register the composer callback
getBE().mHwc->registerCallback(this, getBE().mComposerSequenceId);
// Process any initial hotplug and resulting display changes.
processDisplayHotplugEventsLocked();
LOG_ALWAYS_FATAL_IF(!getBE().mHwc->isConnected(HWC_DISPLAY_PRIMARY),
"Registered composer callback but didn't create the default primary display");

// make the default display GLContext current so that we can create textures
// when creating Layers (which may happens before we render something)
getDefaultDisplayDeviceLocked()->makeCurrent();

if (useVrFlinger) {
auto vrFlingerRequestDisplayCallback = [this] (bool requestDisplay) {
// This callback is called from the vr flinger dispatch thread. We
// need to call signalTransaction(), which requires holding
// mStateLock when we're not on the main thread. Acquiring
// mStateLock from the vr flinger dispatch thread might trigger a
// deadlock in surface flinger (see b/66916578), so post a message
// to be handled on the main thread instead.
sp<LambdaMessage> message = new LambdaMessage([=]() {
ALOGI("VR request display mode: requestDisplay=%d", requestDisplay);
mVrFlingerRequestsDisplay = requestDisplay;
signalTransaction();
});
postMessageAsync(message);
};
mVrFlinger = dvr::VrFlinger::Create(getBE().mHwc->getComposer(),
getBE().mHwc->getHwcDisplayId(HWC_DISPLAY_PRIMARY).value_or(0),
vrFlingerRequestDisplayCallback);
if (!mVrFlinger) {
ALOGE("Failed to start vrflinger");
}
}
// the EventControl thread toggles HW vsync on and off
mEventControlThread = std::make_unique<impl::EventControlThread>(
[this](bool enabled) { setVsyncEnabled(HWC_DISPLAY_PRIMARY, enabled); });

// initialize our drawing state
mDrawingState = mCurrentState;

// set initial conditions (e.g. unblank default device)
// initialize the displays
initializeDisplays();

getBE().mRenderEngine->primeCache();

// Inform native graphics APIs whether the present timestamp is supported:
if (getHwComposer().hasCapability(
HWC2::Capability::PresentFenceIsNotReliable)) {
mStartPropertySetThread = new StartPropertySetThread(false);
} else {
mStartPropertySetThread = new StartPropertySetThread(true);
}

if (mStartPropertySetThread->Start() != NO_ERROR) {
ALOGE("Run StartPropertySetThread failed!");
}

// This is a hack. Per definition of getDataspaceSaturationMatrix, the returned matrix
// is used to saturate legacy sRGB content. However, to make sure the same color under
// Display P3 will be saturated to the same color, we intentionally break the API spec
// and apply this saturation matrix on Display P3 content. Unless the risk of applying
// such saturation matrix on Display P3 is understood fully, the API should always return
// identify matrix.
mEnhancedSaturationMatrix = getBE().mHwc->getDataspaceSaturationMatrix(HWC_DISPLAY_PRIMARY,
Dataspace::SRGB_LINEAR);

// we will apply this on Display P3.
if (mEnhancedSaturationMatrix != mat4()) {
ColorSpace srgb(ColorSpace::sRGB());
ColorSpace displayP3(ColorSpace::DisplayP3());
mat4 srgbToP3 = mat4(ColorSpaceConnector(srgb, displayP3).getTransform());
mat4 p3ToSrgb = mat4(ColorSpaceConnector(displayP3, srgb).getTransform());
mEnhancedSaturationMatrix = srgbToP3 * mEnhancedSaturationMatrix * p3ToSrgb;
}

mBuiltInBitmask.set(HWC_DISPLAY_PRIMARY);
for (int disp = HWC_DISPLAY_BUILTIN_2; disp <= HWC_DISPLAY_BUILTIN_4; disp++) {
mBuiltInBitmask.set(disp);
}

mPluggableBitmask.set(HWC_DISPLAY_EXTERNAL);
for (int disp = HWC_DISPLAY_EXTERNAL_2; disp <= HWC_DISPLAY_EXTERNAL_4; disp++) {
mPluggableBitmask.set(disp);
}

ALOGV("Done initializing");
}
2.3.1 Creating HWComposer

[->native/services/surfaceflinger/DisplayHardware/HWComposer.cpp]

HWComposer::HWComposer(std::unique_ptr<android::Hwc2::Composer> composer)
: mHwcDevice(std::make_unique<HWC2::Device>(std::move(composer))) {}
2.3.2 processDisplayHotplugEventsLocked
void SurfaceFlinger::processDisplayHotplugEventsLocked() {
// iterate over the pending hotplug events for connected displays
for (const auto& event : mPendingHotplugEvents) {
auto displayType = determineDisplayType(event.display, event.connection);
if (displayType == DisplayDevice::DISPLAY_ID_INVALID) {
ALOGW("Unable to determine the display type for display %" PRIu64, event.display);
continue;
}

if (getBE().mHwc->isUsingVrComposer() && displayType == DisplayDevice::DISPLAY_EXTERNAL) {
ALOGE("External displays are not supported by the vr hardware composer.");
continue;
}

if (!getBE().mHwc->onHotplug(event.display, displayType, event.connection)) {
continue;
}

if (event.connection == HWC2::Connection::Connected) {
if (!mBuiltinDisplays[displayType].get()) {
ALOGV("Creating built in display %d", displayType);
mBuiltinDisplays[displayType] = new BBinder();
// All non-virtual displays are currently considered secure.
DisplayDeviceState info(displayType, true);
info.displayName = displayType == DisplayDevice::DISPLAY_PRIMARY ?
"Built-in Screen" : "External Screen";
mCurrentState.displays.add(mBuiltinDisplays[displayType], info);
mInterceptor->saveDisplayCreation(info);
}
} else {
ALOGV("Removing built in display %d", displayType);

ssize_t idx = mCurrentState.displays.indexOfKey(mBuiltinDisplays[displayType]);
if (idx >= 0) {
const DisplayDeviceState& info(mCurrentState.displays.valueAt(idx));
mInterceptor->saveDisplayDeletion(info.displayId);
mCurrentState.displays.removeItemsAt(idx);
}
mBuiltinDisplays[displayType].clear();
if ((event.display >= 0) &&
(event.display < DisplayDevice::NUM_BUILTIN_DISPLAY_TYPES)) {
// Display no longer exists.
mActiveDisplays.reset(event.display);
}
}

processDisplayChangesLocked();
}

mPendingHotplugEvents.clear();
}

This iterates over the connected displays. Displays fall into three types: primary, external, and virtual. The actual handling happens in processDisplayChangesLocked, see Section 2.3.3.

enum DisplayType {
DISPLAY_ID_INVALID = -1,
DISPLAY_PRIMARY = HWC_DISPLAY_PRIMARY,
DISPLAY_EXTERNAL = HWC_DISPLAY_EXTERNAL,
DISPLAY_VIRTUAL = HWC_DISPLAY_VIRTUAL,
NUM_BUILTIN_DISPLAY_TYPES = HWC_NUM_PHYSICAL_DISPLAY_TYPES,
};
2.3.3 processDisplayChangesLocked
void SurfaceFlinger::processDisplayChangesLocked() {
// here we take advantage of Vector's copy-on-write semantics to
// improve performance by skipping the transaction entirely when
// know that the lists are identical
const KeyedVector<wp<IBinder>, DisplayDeviceState>& curr(mCurrentState.displays);
const KeyedVector<wp<IBinder>, DisplayDeviceState>& draw(mDrawingState.displays);
if (!curr.isIdenticalTo(draw)) {
mVisibleRegionsDirty = true;
const size_t cc = curr.size();
size_t dc = draw.size();

// find the displays that were removed
// (ie: in drawing state but not in current state)
// also handle displays that changed
// (ie: displays that are in both lists)
for (size_t i = 0; i < dc;) {
const ssize_t j = curr.indexOfKey(draw.keyAt(i));
if (j < 0) {
// in drawing state but not in current state
// Call makeCurrent() on the primary display so we can
// be sure that nothing associated with this display
// is current.
const sp<const DisplayDevice> defaultDisplay(getDefaultDisplayDeviceLocked());
if (defaultDisplay != nullptr) defaultDisplay->makeCurrent();
sp<DisplayDevice> hw(getDisplayDeviceLocked(draw.keyAt(i)));
if (hw != nullptr) hw->disconnect(getHwComposer());
if (draw[i].type < DisplayDevice::NUM_BUILTIN_DISPLAY_TYPES)
mEventThread->onHotplugReceived(draw[i].type, false);
mDisplays.removeItem(draw.keyAt(i));
} else {
// this display is in both lists. see if something changed.
const DisplayDeviceState& state(curr[j]);
const wp<IBinder>& display(curr.keyAt(j));
const sp<IBinder> state_binder = IInterface::asBinder(state.surface);
const sp<IBinder> draw_binder = IInterface::asBinder(draw[i].surface);
if (state_binder != draw_binder) {
// changing the surface is like destroying and
// recreating the DisplayDevice, so we just remove it
// from the drawing state, so that it get re-added
// below.
sp<DisplayDevice> hw(getDisplayDeviceLocked(display));
if (hw != nullptr) hw->disconnect(getHwComposer());
mDisplays.removeItem(display);
mDrawingState.displays.removeItemsAt(i);
dc--;
// at this point we must loop to the next item
continue;
}

const sp<DisplayDevice> disp(getDisplayDeviceLocked(display));
if (disp != nullptr) {
if (state.layerStack != draw[i].layerStack) {
disp->setLayerStack(state.layerStack);
}
if ((state.orientation != draw[i].orientation) ||
(state.viewport != draw[i].viewport) || (state.frame != draw[i].frame)) {
disp->setProjection(state.orientation, state.viewport, state.frame);
}
if (state.width != draw[i].width || state.height != draw[i].height) {
disp->setDisplaySize(state.width, state.height);
}
}
}
++i;
}

// find displays that were added
// (ie: in current state but not in drawing state)
for (size_t i = 0; i < cc; i++) {
if (draw.indexOfKey(curr.keyAt(i)) < 0) {
const DisplayDeviceState& state(curr[i]);

sp<DisplaySurface> dispSurface;
sp<IGraphicBufferProducer> producer;
sp<IGraphicBufferProducer> bqProducer;
sp<IGraphicBufferConsumer> bqConsumer;
// create the BufferQueue producer and consumer
mCreateBufferQueue(&bqProducer, &bqConsumer, false);

int32_t hwcId = -1;
if (state.isVirtualDisplay()) {
// Virtual displays without a surface are dormant:
// they have external state (layer stack, projection,
// etc.) but no internal state (i.e. a DisplayDevice).
if (state.surface != nullptr) {
// Allow VR composer to use virtual displays.
if (mUseHwcVirtualDisplays || getBE().mHwc->isUsingVrComposer()) {
DisplayUtils *displayUtils = DisplayUtils::getInstance();
int width = 0;
int status = state.surface->query(NATIVE_WINDOW_WIDTH, &width);
ALOGE_IF(status != NO_ERROR, "Unable to query width (%d)", status);
int height = 0;
status = state.surface->query(NATIVE_WINDOW_HEIGHT, &height);
ALOGE_IF(status != NO_ERROR, "Unable to query height (%d)", status);
int intFormat = 0;
status = state.surface->query(NATIVE_WINDOW_FORMAT, &intFormat);
ALOGE_IF(status != NO_ERROR, "Unable to query format (%d)", status);
auto format = static_cast<ui::PixelFormat>(intFormat);

if (maxVirtualDisplaySize == 0 ||
( (uint64_t)width <= maxVirtualDisplaySize &&
(uint64_t)height <= maxVirtualDisplaySize)) {
uint64_t usage = 0;
// Replace with native_window_get_consumer_usage ?
status = state.surface->getConsumerUsage(&usage);
ALOGW_IF(status != NO_ERROR, "Unable to query usage (%d)", status);
if ( (status == NO_ERROR) &&
displayUtils->canAllocateHwcDisplayIdForVDS(usage)) {
getBE().mHwc->allocateVirtualDisplay(
width, height, &format, &hwcId);
}
}
}

// TODO: Plumb requested format back up to consumer
DisplayUtils::getInstance()->initVDSInstance(*getBE().mHwc,
hwcId, state.surface,
dispSurface, producer,
bqProducer, bqConsumer,
state.displayName, state.isSecure);
}
} else {
ALOGE_IF(state.surface != nullptr,
"adding a supported display, but rendering "
"surface is provided (%p), ignoring it",
state.surface.get());

hwcId = state.type;
dispSurface = new FramebufferSurface(*getBE().mHwc, hwcId, bqConsumer);
producer = bqProducer;
}

const wp<IBinder>& display(curr.keyAt(i));
if (dispSurface != nullptr) {
mDisplays.add(display,
setupNewDisplayDeviceInternal(display, hwcId, state, dispSurface,
producer));
if (!state.isVirtualDisplay()) {
mEventThread->onHotplugReceived(state.type, true);
}
}
}
}
}

mDrawingState.displays = mCurrentState.displays;
}
2.3.4 initializeDisplays

[->native/services/surfaceflinger/SurfaceFlinger.cpp]

void SurfaceFlinger::initializeDisplays() {
class MessageScreenInitialized : public MessageBase {
SurfaceFlinger* flinger;
public:
explicit MessageScreenInitialized(SurfaceFlinger* flinger) : flinger(flinger) { }
virtual bool handler() {
flinger->onInitializeDisplays();
return true;
}
};
sp<MessageBase> msg = new MessageScreenInitialized(this);
postMessageAsync(msg); // we may be called from main thread, use async message
}

void SurfaceFlinger::onInitializeDisplays() {
// reset screen orientation and use primary layer stack
Vector<ComposerState> state;
Vector<DisplayState> displays;
DisplayState d;
d.what = DisplayState::eDisplayProjectionChanged |
DisplayState::eLayerStackChanged;
d.token = mBuiltinDisplays[DisplayDevice::DISPLAY_PRIMARY];
d.layerStack = 0;
d.orientation = DisplayState::eOrientationDefault;
d.frame.makeInvalid();
d.viewport.makeInvalid();
d.width = 0;
d.height = 0;
displays.add(d);
setTransactionState(state, displays, 0);
setPowerModeInternal(getDisplayDevice(d.token), HWC_POWER_MODE_NORMAL,
/*stateLockHeld*/ false);

const auto& activeConfig = getBE().mHwc->getActiveConfig(HWC_DISPLAY_PRIMARY);
const nsecs_t period = activeConfig->getVsyncPeriod();
mAnimFrameTracker.setDisplayRefreshPeriod(period);

// Use phase of 0 since phase is not known.
// Use latency of 0, which will snap to the ideal latency.
setCompositorTimingSnapped(0, period, 0);
}

Here a message is posted through the handler (postMessageAsync), and the display initialization is then performed on the main thread.

2.4 EventThread

[->native/services/surfaceflinger/EventThread.cpp]

EventThread::EventThread(VSyncSource* src, ResyncWithRateLimitCallback resyncWithRateLimitCallback,
InterceptVSyncsCallback interceptVSyncsCallback, const char* threadName)
: mVSyncSource(src),
mResyncWithRateLimitCallback(resyncWithRateLimitCallback),
mInterceptVSyncsCallback(interceptVSyncsCallback) {
for (auto& event : mVSyncEvent) {
event.header.type = DisplayEventReceiver::DISPLAY_EVENT_VSYNC;
event.header.id = 0;
event.header.timestamp = 0;
event.vsync.count = 0;
}

mThread = std::thread(&EventThread::threadMain, this);

pthread_setname_np(mThread.native_handle(), threadName);

pid_t tid = pthread_gettid_np(mThread.native_handle());

// Use SCHED_FIFO to minimize jitter
constexpr int EVENT_THREAD_PRIORITY = 2;
struct sched_param param = {0};
param.sched_priority = EVENT_THREAD_PRIORITY;
if (pthread_setschedparam(mThread.native_handle(), SCHED_FIFO, &param) != 0) {
ALOGE("Couldn't set SCHED_FIFO for EventThread");
}

set_sched_policy(tid, SP_FOREGROUND);
}

EventThread implements VSyncSource::Callback; it is the callback that DispSyncSource invokes, as sketched below.
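The relationship, sketched from the code quoted in this article (signatures abridged, details may differ between AOSP versions): DispSyncSource implements VSyncSource and forwards every DispSync tick to its registered Callback, and EventThread registers itself as that Callback:

#include <utils/Timers.h>  // nsecs_t

class VSyncSource {
public:
    class Callback {
    public:
        virtual ~Callback() = default;
        // EventThread::onVSyncEvent (Section 3.7) implements this
        virtual void onVSyncEvent(nsecs_t when) = 0;
    };
    virtual ~VSyncSource() = default;
    virtual void setVSyncEnabled(bool enable) = 0;   // toggles the underlying DispSync listener
    virtual void setCallback(Callback* callback) = 0;
};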

2.4.1 onFirstRef
void EventThread::Connection::onFirstRef() {
// NOTE: mEventThread doesn't hold a strong reference on us
mEventThread->registerDisplayEventConnection(this);
}

The connection registers itself with the EventThread to receive display events.

2.4.2 threadMain
void EventThread::threadMain() NO_THREAD_SAFETY_ANALYSIS {
std::unique_lock<std::mutex> lock(mMutex);
while (mKeepRunning) {
DisplayEventReceiver::Event event;
Vector<sp<EventThread::Connection> > signalConnections;
// see Section 2.4.3
signalConnections = waitForEventLocked(&lock, &event);

// dispatch events to listeners...
const size_t count = signalConnections.size();
for (size_t i = 0; i < count; i++) {
const sp<Connection>& conn(signalConnections[i]);
// now see if we still need to report this event
// dispatch the event
status_t err = conn->
postEvent(event);
if (err == -EAGAIN || err == -EWOULDBLOCK) {
// The destination doesn't accept events anymore, it's probably
// full. For now, we just drop the events on the floor.
// FIXME: Note that some events cannot be dropped and would have
// to be re-sent later.
// Right-now we don't have the ability to do this.
ALOGW("EventThread: dropping event (%08x) for connection %p", event.header.type,
conn.get());
} else if (err < 0) {
// handle any other error on the pipe as fatal. the only
// reasonable thing to do is to clean-up this connection.
// The most common error we'll get here is -EPIPE.
// remove the connection
removeDisplayEventConnectionLocked(signalConnections[i]);
}
}
}
}
2.4.3 waitForEventLocked
// This will return when (1) a vsync event has been received, and (2) there was
// at least one connection interested in receiving it when we started waiting.
Vector<sp<EventThread::Connection> > EventThread::waitForEventLocked(
std::unique_lock<std::mutex>* lock, DisplayEventReceiver::Event* event) {
Vector<sp<EventThread::Connection> > signalConnections;

while (signalConnections.isEmpty() && mKeepRunning) {
bool eventPending = false;
bool waitForVSync = false;

size_t vsyncCount = 0;
nsecs_t timestamp = 0;
for (int32_t i = 0; i < DisplayDevice::NUM_BUILTIN_DISPLAY_TYPES; i++) {
timestamp = mVSyncEvent[i].header.timestamp;
if (timestamp) {
// we have a vsync event to dispatch
if (mInterceptVSyncsCallback) {
mInterceptVSyncsCallback(timestamp);
}
*event = mVSyncEvent[i];
mVSyncEvent[i].header.timestamp = 0;
vsyncCount = mVSyncEvent[i].vsync.count;
break;
}
}

// find out connections waiting for events
size_t count = mDisplayEventConnections.size();
if (!timestamp && count) {
// no vsync event, see if there are other events
eventPending = !mPendingEvents.isEmpty();
if (eventPending) {
// we have some other event to dispatch
*event = mPendingEvents[0];
mPendingEvents.removeAt(0);
}
}

for (size_t i = 0; i < count;) {
sp<Connection> connection(mDisplayEventConnections[i].promote());
if (connection != nullptr) {
bool added = false;
if (connection->count >= 0) {
// we need vsync events because at least
// one connection is waiting for it
waitForVSync = true;
if (timestamp) {
// we consume the event only if it's time
// (ie: we received a vsync event)
if (connection->count == 0) {
// fired this time around
connection->count = -1;
signalConnections.add(connection);
added = true;
} else if (connection->count == 1 ||
(vsyncCount % connection->count) == 0) {
// continuous event, and time to report it
signalConnections.add(connection);
added = true;
}
}
}

if (eventPending && !timestamp && !added) {
// we don't have a vsync event to process
// (timestamp==0), but we have some pending
// messages.
signalConnections.add(connection);
}
++i;
} else {
// we couldn't promote this reference, the connection has
// died, so clean-up!
mDisplayEventConnections.removeAt(i);
--count;
}
}

// Here we figure out if we need to enable or disable vsyncs
if (timestamp && !waitForVSync) {
// we received a VSYNC but we have no clients
// don't report it, and disable VSYNC events
disableVSyncLocked();
} else if (!timestamp && waitForVSync) {
// we have at least one client, so we want vsync enabled
// (TODO: this function is called right after we finish
// notifying clients of a vsync, so this call will be made
// at the vsync rate, e.g. 60fps. If we can accurately
// track the current state we could avoid making this call
// so often.)
enableVSyncLocked();
}

// note: !timestamp implies signalConnections.isEmpty(), because we
// don't populate signalConnections if there's no vsync pending
if (!timestamp && !eventPending) {
// wait for something to happen
if (waitForVSync) {
// This is where we spend most of our time, waiting
// for vsync events and new client registrations.
//
// If the screen is off, we can't use h/w vsync, so we
// use a 16ms timeout instead. It doesn't need to be
// precise, we just need to keep feeding our clients.
//
// We don't want to stall if there's a driver bug, so we
// use a (long) timeout when waiting for h/w vsync, and
// generate fake events when necessary.
bool softwareSync = mUseSoftwareVSync;
auto timeout = softwareSync ? 16ms : 1000ms;
if (mCondition.wait_for(*lock, timeout) == std::cv_status::timeout) {
if (!softwareSync) {
ALOGW("Timed out waiting for hw vsync; faking it");
}
// FIXME: how do we decide which display id the fake
// vsync came from ?
mVSyncEvent[0].header.type = DisplayEventReceiver::DISPLAY_EVENT_VSYNC;
mVSyncEvent[0].header.id = DisplayDevice::DISPLAY_PRIMARY;
mVSyncEvent[0].header.timestamp = systemTime(SYSTEM_TIME_MONOTONIC);
mVSyncEvent[0].vsync.count++;
}
} else {
// Nobody is interested in vsync, so we just want to sleep.
// h/w vsync should be disabled, so this will wait until we
// get a new connection, or an existing connection becomes
// interested in receiving vsync again.
mCondition.wait(*lock);
}
}
}

// here we're guaranteed to have a timestamp and some connections to signal
// (The connections might have dropped out of mDisplayEventConnections
// while we were asleep, but we'll still have strong references to them.)
return signalConnections;
}

The EventThread ends up blocked in mCondition.wait, waiting to be woken up. A connection's count field controls delivery: count == 0 is a one-shot vsync request, and count >= 1 means the connection receives every count-th vsync.

2.5 setEventThread

[->native/services/surfaceflinger/MessageQueue.cpp]

void MessageQueue::setEventThread(android::EventThread* eventThread) {
if (mEventThread == eventThread) {
return;
}

if (mEventTube.getFd() >= 0) {
mLooper->removeFd(mEventTube.getFd());
}

mEventThread = eventThread;
mEvents = eventThread->createEventConnection();
mEvents->stealReceiveChannel(&mEventTube);
mLooper->addFd(mEventTube.getFd(), 0, Looper::EVENT_INPUT, MessageQueue::cb_eventReceiver,
this);
}

This mainly registers the receive end of the event connection, mEventTube (a BitTube), with the Looper; when data arrives on it, cb_eventReceiver is called. A minimal sketch of this addFd pattern follows.
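The addFd pattern as a standalone sketch (illustrative; onFdReadable and watchFd are made-up names):

#include <utils/Looper.h>
#include <unistd.h>

using android::Looper;
using android::sp;

static int onFdReadable(int fd, int events, void* /*data*/) {
    char buf[64];
    read(fd, buf, sizeof(buf));  // drain whatever arrived on the fd
    return 1;                    // non-zero keeps the callback registered
}

void watchFd(const sp<Looper>& looper, int fd) {
    looper->addFd(fd, 0 /* ident */, Looper::EVENT_INPUT, onFdReadable, nullptr);
    looper->pollOnce(-1);        // onFdReadable fires when fd becomes readable
}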

2.6 SF.run

[->native/services/surfaceflinger/SurfaceFlinger.cpp]

void SurfaceFlinger::run() {
do {
waitForEvent();
} while (true);
}

void SurfaceFlinger::waitForEvent() {
mEventQueue->waitMessage();
}

void MessageQueue::waitMessage() {
do {
IPCThreadState::self()->flushCommands();
int32_t ret = mLooper->pollOnce(-1);
switch (ret) {
case Looper::POLL_WAKE:
case Looper::POLL_CALLBACK:
continue;
case Looper::POLL_ERROR:
ALOGE("Looper::POLL_ERROR");
continue;
case Looper::POLL_TIMEOUT:
// timeout (should not happen)
continue;
default:
// should not happen
ALOGE("Looper::pollOnce() returned unknown status %d", ret);
continue;
}
} while (true);
}

This is an endless loop that keeps waiting for messages and handles each one as it arrives.

3. The Vsync Signal

During the creation of HWComposer in Section 2.3.1, a callback is registered.

3.1 registerCallback

// create HWComposer
getBE().mHwc.reset(
new HWComposer(std::make_unique<Hwc2::impl::Composer>(getBE().mHwcServiceName)));
// register the composer callback (SurfaceFlinger implements ComposerCallback)
getBE().mHwc->registerCallback(this, getBE().mComposerSequenceId);

When the hardware generates a Vsync signal, onVsyncReceived is called back; SurfaceFlinger implements the ComposerCallback interface.

class ComposerCallback {
public:
virtual void onHotplugReceived(int32_t sequenceId, hwc2_display_t display,
Connection connection) = 0;
virtual void onRefreshReceived(int32_t sequenceId,
hwc2_display_t display) = 0;
virtual void onVsyncReceived(int32_t sequenceId, hwc2_display_t display,
int64_t timestamp) = 0;
virtual ~ComposerCallback() = default;
};

3.2 onVsyncReceived

void SurfaceFlinger::onVsyncReceived(int32_t sequenceId,
hwc2_display_t displayId, int64_t timestamp) {
Mutex::Autolock lock(mStateLock);
// Ignore any vsyncs from a previous hardware composer.
if (sequenceId != getBE().mComposerSequenceId) {
return;
}

int32_t type;
if (!getBE().mHwc->onVsync(displayId, timestamp, &type)) {
return;
}

bool needsHwVsync = false;

{ // Scope for the lock
Mutex::Autolock _l(mHWVsyncLock);
if (type == DisplayDevice::DISPLAY_PRIMARY && mPrimaryHWVsyncEnabled) {
needsHwVsync = mPrimaryDispSync.addResyncSample(timestamp);
}
}

if (needsHwVsync) {
enableHardwareVsync();
} else {
disableHardwareVsync(false);
}
}

3.3 addResyncSample

[->native/services/surfaceflinger/DispSync.cpp]

bool DispSync::addResyncSample(nsecs_t timestamp) {
Mutex::Autolock lock(mMutex);

ALOGV("[%s] addResyncSample(%" PRId64 ")", mName, ns2us(timestamp));

size_t idx = (mFirstResyncSample + mNumResyncSamples) % MAX_RESYNC_SAMPLES;
mResyncSamples[idx] = timestamp;
if (mNumResyncSamples == 0) {
mPhase = 0;
mReferenceTime = timestamp;
ALOGV("[%s] First resync sample: mPeriod = %" PRId64 ", mPhase = 0, "
"mReferenceTime = %" PRId64,
mName, ns2us(mPeriod), ns2us(mReferenceTime));
mThread->updateModel(mPeriod, mPhase, mReferenceTime);
}

if (mNumResyncSamples < MAX_RESYNC_SAMPLES) {
mNumResyncSamples++;
} else {
mFirstResyncSample = (mFirstResyncSample + 1) % MAX_RESYNC_SAMPLES;
}
// see Section 3.4
updateModelLocked();

if (mNumResyncSamplesSincePresent++ > MAX_RESYNC_SAMPLES_WITHOUT_PRESENT) {
resetErrorLocked();
}

if (mIgnorePresentFences) {
// If we don't have the sync framework we will never have
// addPresentFence called. This means we have no way to know whether
// or not we're synchronized with the HW vsyncs, so we just request
// that the HW vsync events be turned on whenever we need to generate
// SW vsync events.
return mThread->hasAnyEventListeners();
}

// Check against kErrorThreshold / 2 to add some hysteresis before having to
// resync again
bool modelLocked = mModelUpdated && mError < (kErrorThreshold / 2);
ALOGV("[%s] addResyncSample returning %s", mName, modelLocked ? "locked" : "unlocked");
return !modelLocked;
}
3.3.1 DispSync initialization
void DispSync::init(bool hasSyncFramework, int64_t dispSyncPresentTimeOffset) {
mIgnorePresentFences = !hasSyncFramework;
mPresentTimeOffset = dispSyncPresentTimeOffset;
mThread->run("DispSync", PRIORITY_URGENT_DISPLAY + PRIORITY_MORE_FAVORABLE);

// set DispSync to SCHED_FIFO to minimize jitter
struct sched_param param = {0};
param.sched_priority = 2;
if (sched_setscheduler(mThread->getTid(), SCHED_FIFO, &param) != 0) {
ALOGE("Couldn't set SCHED_FIFO for DispSyncThread");
}

reset();
beginResync();

if (kTraceDetailedInfo) {
// If we're not getting present fences then the ZeroPhaseTracer
// would prevent HW vsync event from ever being turned off.
// Even if we're just ignoring the fences, the zero-phase tracing is
// not needed because any time there is an event registered we will
// turn on the HW vsync events.
if (!mIgnorePresentFences && kEnableZeroPhaseTracer) {
mZeroPhaseTracer = std::make_unique<ZeroPhaseTracer>();
addEventListener("ZeroPhaseTracer", 0, mZeroPhaseTracer.get());
}
}
}
3.3.2 DispSyncThread.run
virtual bool threadLoop() {
status_t err;
nsecs_t now = systemTime(SYSTEM_TIME_MONOTONIC);

while (true) {
Vector<CallbackInvocation> callbackInvocations;

nsecs_t targetTime = 0;

{ // Scope for lock
Mutex::Autolock lock(mMutex);

if (kTraceDetailedInfo) {
ATRACE_INT64("DispSync:Frame", mFrameNumber);
}
ALOGV("[%s] Frame %" PRId64, mName, mFrameNumber);
++mFrameNumber;

if (mStop) {
return false;
}

if (mPeriod == 0) {
err = mCond.wait(mMutex);
if (err != NO_ERROR) {
ALOGE("error waiting for new events: %s (%d)", strerror(-err), err);
return false;
}
continue;
}

targetTime = computeNextEventTimeLocked(now);

bool isWakeup = false;

if (now < targetTime) {
if (kTraceDetailedInfo) ATRACE_NAME("DispSync waiting");

if (targetTime == INT64_MAX) {
ALOGV("[%s] Waiting forever", mName);
err = mCond.wait(mMutex);
} else {
ALOGV("[%s] Waiting until %" PRId64, mName, ns2us(targetTime));
err = mCond.waitRelative(mMutex, targetTime - now);
}

if (err == TIMED_OUT) {
isWakeup = true;
} else if (err != NO_ERROR) {
ALOGE("error waiting for next event: %s (%d)", strerror(-err), err);
return false;
}
}

now = systemTime(SYSTEM_TIME_MONOTONIC);

// Don't correct by more than 1.5 ms
static const nsecs_t kMaxWakeupLatency = us2ns(1500);

if (isWakeup) {
mWakeupLatency = ((mWakeupLatency * 63) + (now - targetTime)) / 64;
mWakeupLatency = min(mWakeupLatency, kMaxWakeupLatency);
if (kTraceDetailedInfo) {
ATRACE_INT64("DispSync:WakeupLat", now - targetTime);
ATRACE_INT64("DispSync:AvgWakeupLat", mWakeupLatency);
}
}

callbackInvocations = gatherCallbackInvocationsLocked(now);
}

if (callbackInvocations.size() > 0) {
fireCallbackInvocations(callbackInvocations);
}
}

return false;
}

3.4 DS.updateModelLocked

void DispSync::updateModelLocked() {
ALOGV("[%s] updateModelLocked %zu", mName, mNumResyncSamples);
if (mNumResyncSamples >= MIN_RESYNC_SAMPLES_FOR_UPDATE) {
ALOGV("[%s] Computing...", mName);
nsecs_t durationSum = 0;
nsecs_t minDuration = INT64_MAX;
nsecs_t maxDuration = 0;
for (size_t i = 1; i < mNumResyncSamples; i++) {
size_t idx = (mFirstResyncSample + i) % MAX_RESYNC_SAMPLES;
size_t prev = (idx + MAX_RESYNC_SAMPLES - 1) % MAX_RESYNC_SAMPLES;
nsecs_t duration = mResyncSamples[idx] - mResyncSamples[prev];
durationSum += duration;
minDuration = min(minDuration, duration);
maxDuration = max(maxDuration, duration);
}

// Exclude the min and max from the average
durationSum -= minDuration + maxDuration;
mPeriod = durationSum / (mNumResyncSamples - 3);

ALOGV("[%s] mPeriod = %" PRId64, mName, ns2us(mPeriod));

double sampleAvgX = 0;
double sampleAvgY = 0;
double scale = 2.0 * M_PI / double(mPeriod);
// Intentionally skip the first sample
for (size_t i = 1; i < mNumResyncSamples; i++) {
size_t idx = (mFirstResyncSample + i) % MAX_RESYNC_SAMPLES;
nsecs_t sample = mResyncSamples[idx] - mReferenceTime;
double samplePhase = double(sample % mPeriod) * scale;
sampleAvgX += cos(samplePhase);
sampleAvgY += sin(samplePhase);
}

sampleAvgX /= double(mNumResyncSamples - 1);
sampleAvgY /= double(mNumResyncSamples - 1);

mPhase = nsecs_t(atan2(sampleAvgY, sampleAvgX) / scale);

ALOGV("[%s] mPhase = %" PRId64, mName, ns2us(mPhase));

if (mPhase < -(mPeriod / 2)) {
mPhase += mPeriod;
ALOGV("[%s] Adjusting mPhase -> %" PRId64, mName, ns2us(mPhase));
}

if (kTraceDetailedInfo) {
ATRACE_INT64("DispSync:Period", mPeriod);
ATRACE_INT64("DispSync:Phase", mPhase + mPeriod / 2);
}

// Artificially inflate the period if requested.
mPeriod += mPeriod * mRefreshSkipCount;
// see Section 3.5
mThread->updateModel(mPeriod, mPhase, mReferenceTime);
mModelUpdated = true;
}
}
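Informally, what updateModelLocked computes (my reading of the code above): the period is the mean of the sample-to-sample durations with the minimum and maximum excluded, and the phase is a circular mean of the sample offsets, so that samples falling just before and just after a period boundary do not cancel each other out:

$$\text{mPeriod} = \frac{\sum_i d_i - d_{\min} - d_{\max}}{N - 3}, \qquad d_i = t_i - t_{i-1}$$

$$\theta_i = \frac{2\pi\,\big((t_i - t_{\mathrm{ref}}) \bmod \text{mPeriod}\big)}{\text{mPeriod}}, \qquad \text{mPhase} = \frac{\text{mPeriod}}{2\pi}\,\operatorname{atan2}\big(\overline{\sin\theta_i},\ \overline{\cos\theta_i}\big)$$

Here N is mNumResyncSamples; N samples give N − 1 durations, and dropping the min and max leaves the N − 3 used as the divisor.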

3.5 DST.updateModel

[->native/services/surfaceflinger/DispSync.cpp]

void updateModel(nsecs_t period, nsecs_t phase, nsecs_t referenceTime) {
if (kTraceDetailedInfo) ATRACE_CALL();
Mutex::Autolock lock(mMutex);
mPeriod = period;
mPhase = phase;
mReferenceTime = referenceTime;
ALOGV("[%s] updateModel: mPeriod = %" PRId64 ", mPhase = %" PRId64
" mReferenceTime = %" PRId64,
mName, ns2us(mPeriod), ns2us(mPhase), ns2us(mReferenceTime));
// wake up the DispSyncThread
mCond.signal();
}

Execution then moves into the DispSyncThread; its thread loop was covered in detail in Section 3.3.2. Here we mainly look at fireCallbackInvocations.

void fireCallbackInvocations(const Vector<CallbackInvocation>& callbacks) {
if (kTraceDetailedInfo) ATRACE_CALL();
for (size_t i = 0; i < callbacks.size(); i++) {
callbacks[i].mCallback->onDispSyncEvent(callbacks[i].mEventTime);
}
}

SurfaceFlinger::init created the DispSyncSource objects, and the callback invoked here is DispSyncSource::onDispSyncEvent.

3.6 DSS.onDispSyncEvent

[->native/services/surfaceflinger/SurfaceFlinger.cpp::DispSyncSource]

virtual void onDispSyncEvent(nsecs_t when) {
VSyncSource::Callback* callback;
{
Mutex::Autolock lock(mCallbackMutex);
callback = mCallback;

if (mTraceVsync) {
mValue = (mValue + 1) % 2;
ATRACE_INT(mVsyncEventLabel.string(), mValue);
}
}

if (callback != nullptr) {
callback->onVSyncEvent(when);
}
}

3.7 onVSyncEvent

void EventThread::onVSyncEvent(nsecs_t timestamp) {
std::lock_guard<std::mutex> lock(mMutex);
mVSyncEvent[0].header.type = DisplayEventReceiver::DISPLAY_EVENT_VSYNC;
mVSyncEvent[0].header.id = 0;
mVSyncEvent[0].header.timestamp = timestamp;
mVSyncEvent[0].vsync.count++;
mCondition.notify_all();
}

mCondition.notify_all wakes the EventThread blocked in waitForEventLocked (Section 2.4.3); its main loop (Section 2.4.2) then dispatches the event via postEvent.

3.8 ET.postEvent

status_t EventThread::Connection::postEvent(const DisplayEventReceiver::Event& event) {
ssize_t size = DisplayEventReceiver::sendEvents(&mChannel, &event, 1);
return size < 0 ? status_t(size) : status_t(NO_ERROR);
}

3.9 DER.sendEvents

[->native/libs/gui/DisplayEventReceiver.cpp]

ssize_t DisplayEventReceiver::sendEvents(gui::BitTube* dataChannel,
Event const* events, size_t count)
{
return gui::BitTube::sendObjects(dataChannel, events, count);
}

Section 2.5 set up the listener on the BitTube; sendObjects here writes the event into that tube, and when the data arrives the registered callback is invoked.
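For reference, the consumer side of the same Event stream, as a minimal sketch (roughly how a client such as Choreographer's native receiver drains vsync events; drainVsyncEvents is a made-up name). The loop mirrors MessageQueue::eventReceiver shown below:

#include <gui/DisplayEventReceiver.h>

using android::DisplayEventReceiver;

void drainVsyncEvents(DisplayEventReceiver& receiver) {
    DisplayEventReceiver::Event buffer[8];
    ssize_t n;
    while ((n = receiver.getEvents(buffer, 8)) > 0) {
        for (ssize_t i = 0; i < n; i++) {
            if (buffer[i].header.type == DisplayEventReceiver::DISPLAY_EVENT_VSYNC) {
                // one vsync tick; the timestamp is in buffer[i].header.timestamp
            }
        }
    }
}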

3.9.1 MQ.cb_eventReceiver
int MessageQueue::cb_eventReceiver(int fd, int events, void* data) {
MessageQueue* queue = reinterpret_cast<MessageQueue*>(data);
return queue->eventReceiver(fd, events);
}
3.9.2 MQ.eventReceiver
int MessageQueue::eventReceiver(int /*fd*/, int /*events*/) {
ssize_t n;
DisplayEventReceiver::Event buffer[8];
while ((n = DisplayEventReceiver::getEvents(&mEventTube, buffer, 8)) > 0) {
for (int i = 0; i < n; i++) {
if (buffer[i].header.type == DisplayEventReceiver::DISPLAY_EVENT_VSYNC) {
mHandler->dispatchInvalidate();
break;
}
}
}
return 1;
}

3.10 MQ.dispatchInvalidate

void MessageQueue::Handler::dispatchInvalidate() {
if ((android_atomic_or(eventMaskInvalidate, &mEventMask) & eventMaskInvalidate) == 0) {
mQueue.mLooper->sendMessage(this, Message(MessageQueue::INVALIDATE));
}
}

3.11 MQ.handleMessage

void MessageQueue::Handler::handleMessage(const Message& message) {
switch (message.what) {
case INVALIDATE:
android_atomic_and(~eventMaskInvalidate, &mEventMask);
mQueue.mFlinger->onMessageReceived(message.what);
break;
case REFRESH:
android_atomic_and(~eventMaskRefresh, &mEventMask);
mQueue.mFlinger->onMessageReceived(message.what);
break;
}
}

3.12 SF.onMessageReceived

void SurfaceFlinger::onMessageReceived(int32_t what) {
ATRACE_CALL();
switch (what) {
case MessageQueue::INVALIDATE: {
bool frameMissed = !mHadClientComposition &&
mPreviousPresentFence != Fence::NO_FENCE &&
(mPreviousPresentFence->getSignalTime() ==
Fence::SIGNAL_TIME_PENDING);
ATRACE_INT("FrameMissed", static_cast<int>(frameMissed));
if (frameMissed) {
mTimeStats.incrementMissedFrames();
if (mPropagateBackpressure) {
signalLayerUpdate();
break;
}
}

if (mDolphinFuncsEnabled) {
int maxQueuedFrames = 0;
mDrawingState.traverseInZOrder([&](Layer* layer) {
if (layer->hasQueuedFrame() &&
layer->shouldPresentNow(mPrimaryDispSync)) {
int layerQueuedFrames = layer->getQueuedFrameCount();
if (maxQueuedFrames < layerQueuedFrames &&
!layer->visibleNonTransparentRegion.isEmpty()) {
maxQueuedFrames = layerQueuedFrames;
}
}
});
if(mDolphinMonitor(maxQueuedFrames)) {
signalLayerUpdate();
break;
}
}

// Now that we're going to make it to the handleMessageTransaction()
// call below it's safe to call updateVrFlinger(), which will
// potentially trigger a display handoff.
updateVrFlinger();

bool refreshNeeded = handleMessageTransaction();
refreshNeeded |= handleMessageInvalidate();
refreshNeeded |= mRepaintEverything;
// if a refresh is needed
if (refreshNeeded && CC_LIKELY(mBootStage != BootStage::BOOTLOADER)) {
// Signal a refresh if a transaction modified the window state,
// a new buffer was latched, or if HWC has requested a full
// repaint
if (mDolphinFuncsEnabled) {
mDolphinRefresh();
}
signalRefresh();
}
break;
}
case MessageQueue::REFRESH: {
handleMessageRefresh();
break;
}
}
}

4. Image Output

After the Vsync signal propagates through the chain of calls above, it reaches onMessageReceived. When the screen needs to be refreshed, the handleMessageRefresh flow is invoked, as follows:

4.1 SF.handleMessageRefresh

void SurfaceFlinger::handleMessageRefresh() {
ATRACE_CALL();

mRefreshPending = false;

nsecs_t refreshStartTime = systemTime(SYSTEM_TIME_MONOTONIC);
// the main composition steps: preComposition, rebuildLayerStacks, setUpHWComposer, doComposition, postComposition
preComposition(refreshStartTime);
rebuildLayerStacks();
setUpHWComposer();
doDebugFlashRegions();
doTracing("handleRefresh");
logLayerStats();
doComposition();
postComposition(refreshStartTime);

int id = getVsyncSource();
mPreviousPresentFence = (id != -1) ? getBE().mHwc->getPresentFence(id) : Fence::NO_FENCE;
ALOGV("Checking for backpressure against %d retire fence", id);

mHadClientComposition = false;
for (size_t displayId = 0; displayId < mDisplays.size(); ++displayId) {
const sp<DisplayDevice>& displayDevice = mDisplays[displayId];
mHadClientComposition = mHadClientComposition ||
getBE().mHwc->hasClientComposition(displayDevice->getHwcDisplayId());
}
mVsyncModulator.onRefreshed(mHadClientComposition);

mLayersWithQueuedFrames.clear();
}

4.2 SF.preComposition

void SurfaceFlinger::preComposition(nsecs_t refreshStartTime)
{
ATRACE_CALL();
ALOGV("preComposition");

bool needExtraInvalidate = false;
mDrawingState.traverseInZOrder([&](Layer* layer) {
// invoke onPreComposition on every layer
if (layer->onPreComposition(refreshStartTime)) {
needExtraInvalidate = true;
}
});
// if any layer requested it, signal another layer update
if (needExtraInvalidate) {
signalLayerUpdate();
}
}

4.3 SF.rebuildLayerStacks

void SurfaceFlinger::rebuildLayerStacks() {
ATRACE_CALL();
ALOGV("rebuildLayerStacks");
Mutex::Autolock lock(mDolphinStateLock);

// rebuild the visible layer list per screen
if (CC_UNLIKELY(mVisibleRegionsDirty)) {
ATRACE_NAME("rebuildLayerStacks VR Dirty");
mVisibleRegionsDirty = false;
invalidateHwcGeometry();

for (size_t dpy=0 ; dpy<mDisplays.size() ; dpy++) {
Region opaqueRegion;
Region dirtyRegion;
Vector<sp<Layer>> layersSortedByZ;
Vector<sp<Layer>> layersNeedingFences;
const sp<DisplayDevice>& displayDevice(mDisplays[dpy]);
const Transform& tr(displayDevice->getTransform());
const Rect bounds(displayDevice->getBounds());
if (displayDevice->isDisplayOn()) {
// compute each layer's visible region
computeVisibleRegions(displayDevice, dirtyRegion, opaqueRegion);

mDrawingState.traverseInZOrder([&](Layer* layer) {
bool hwcLayerDestroyed = false;
// the layer belongs to this display's layer stack
if (layer->belongsToDisplay(displayDevice->getLayerStack(),
displayDevice->isPrimary())) {
Region drawRegion(tr.transform(
layer->visibleNonTransparentRegion));
drawRegion.andSelf(bounds);
if (!drawRegion.isEmpty()) {
layersSortedByZ.add(layer);
} else {
// Clear out the HWC layer if this layer was
// previously visible, but no longer is
hwcLayerDestroyed = layer->destroyHwcLayer(
displayDevice->getHwcDisplayId());
}
} else {
// WM changes displayDevice->layerStack upon sleep/awake.
// Here we make sure we delete the HWC layers even if
// WM changed their layer stack.
hwcLayerDestroyed = layer->destroyHwcLayer(
displayDevice->getHwcDisplayId());
}

// If a layer is not going to get a release fence because
// it is invisible, but it is also going to release its
// old buffer, add it to the list of layers needing
// fences.
if (hwcLayerDestroyed) {
auto found = std::find(mLayersWithQueuedFrames.cbegin(),
mLayersWithQueuedFrames.cend(), layer);
if (found != mLayersWithQueuedFrames.cend()) {
layersNeedingFences.add(layer);
}
}
});
}
displayDevice->setVisibleLayersSortedByZ(layersSortedByZ);
displayDevice->setLayersNeedingFences(layersNeedingFences);
displayDevice->undefinedRegion.set(bounds);
displayDevice->undefinedRegion.subtractSelf(
tr.transform(opaqueRegion));
displayDevice->dirtyRegion.orSelf(dirtyRegion);
}
}
}

4.4 SF.setUpHWComposer

void SurfaceFlinger::setUpHWComposer() {
ATRACE_CALL();
ALOGV("setUpHWComposer");

for (size_t dpy=0 ; dpy<mDisplays.size() ; dpy++) {
bool dirty = !mDisplays[dpy]->getDirtyRegion(mRepaintEverything).isEmpty();
bool empty = mDisplays[dpy]->getVisibleLayersSortedByZ().size() == 0;
bool wasEmpty = !mDisplays[dpy]->lastCompositionHadVisibleLayers;

// If nothing has changed (!dirty), don't recompose.
// If something changed, but we don't currently have any visible layers,
// and didn't when we last did a composition, then skip it this time.
// The second rule does two things:
// - When all layers are removed from a display, we'll emit one black
// frame, then nothing more until we get new layers.
// - When a display is created with a private layer stack, we won't
// emit any black frames until a layer is added to the layer stack.
bool mustRecompose = dirty && !(empty && wasEmpty);

ALOGV_IF(mDisplays[dpy]->getDisplayType() == DisplayDevice::DISPLAY_VIRTUAL,
"dpy[%zu]: %s composition (%sdirty %sempty %swasEmpty)", dpy,
mustRecompose ? "doing" : "skipping",
dirty ? "+" : "-",
empty ? "+" : "-",
wasEmpty ? "+" : "-");

mDisplays[dpy]->beginFrame(mustRecompose);

if (mustRecompose) {
mDisplays[dpy]->lastCompositionHadVisibleLayers = !empty;
}
}

// build the h/w work list
if (CC_UNLIKELY(mGeometryInvalid)) {
mGeometryInvalid = false;
for (size_t dpy=0 ; dpy<mDisplays.size() ; dpy++) {
sp<const DisplayDevice> displayDevice(mDisplays[dpy]);
const auto hwcId = displayDevice->getHwcDisplayId();
if (hwcId >= 0) {
const Vector<sp<Layer>>& currentLayers(
displayDevice->getVisibleLayersSortedByZ());
setDisplayAnimating(displayDevice);
for (size_t i = 0; i < currentLayers.size(); i++) {
const auto& layer = currentLayers[i];
if (!layer->hasHwcLayer(hwcId)) {
if (!layer->createHwcLayer(getBE().mHwc.get(), hwcId)) {
layer->forceClientComposition(hwcId);
continue;
}
if (layer->isPrimaryDisplayOnly()) {
setLayerAsMask(hwcId, layer->getLayerId());
}
}

layer->setGeometry(displayDevice, i);
if (mDebugDisableHWC || mDebugRegion) {
layer->forceClientComposition(hwcId);
}
}
}
}
}

// Set the per-frame data
for (size_t displayId = 0; displayId < mDisplays.size(); ++displayId) {
auto& displayDevice = mDisplays[displayId];
const auto hwcId = displayDevice->getHwcDisplayId();

if (hwcId < 0) {
continue;
}
if (mDrawingState.colorMatrixChanged) {
displayDevice->setColorTransform(mDrawingState.colorMatrix);
status_t result = getBE().mHwc->setColorTransform(hwcId, mDrawingState.colorMatrix);
ALOGE_IF(result != NO_ERROR, "Failed to set color transform on "
"display %zd: %d", displayId, result);
}
for (auto& layer : displayDevice->getVisibleLayersSortedByZ()) {
if (layer->isHdrY410()) {
layer->forceClientComposition(hwcId);
} else if ((layer->getDataSpace() == Dataspace::BT2020_PQ ||
layer->getDataSpace() == Dataspace::BT2020_ITU_PQ) &&
!displayDevice->hasHDR10Support()) {
layer->forceClientComposition(hwcId);
} else if ((layer->getDataSpace() == Dataspace::BT2020_HLG ||
layer->getDataSpace() == Dataspace::BT2020_ITU_HLG) &&
!displayDevice->hasHLGSupport()) {
layer->forceClientComposition(hwcId);
}

if (layer->getForceClientComposition(hwcId)) {
ALOGV("[%s] Requesting Client composition", layer->getName().string());
layer->setCompositionType(hwcId, HWC2::Composition::Client);
continue;
}

layer->setPerFrameData(displayDevice);
}

if (hasWideColorDisplay) {
ColorMode colorMode;
Dataspace dataSpace;
RenderIntent renderIntent;
pickColorMode(displayDevice, &colorMode, &dataSpace, &renderIntent);
setActiveColorModeInternal(displayDevice, colorMode, dataSpace, renderIntent);
}
}

mDrawingState.colorMatrixChanged = false;

dumpDrawCycle(true);

for (size_t displayId = 0; displayId < mDisplays.size(); ++displayId) {
auto& displayDevice = mDisplays[displayId];
if (!displayDevice->isDisplayOn()) {
continue;
}

status_t result = displayDevice->prepareFrame(*getBE().mHwc);
ALOGE_IF(result != NO_ERROR, "prepareFrame for display %zd failed:"
" %d (%s)", displayId, result, strerror(-result));
}
}

4.5 SF.doComposition

void SurfaceFlinger::doComposition() {
ATRACE_CALL();
ALOGV("doComposition");

const bool repaintEverything = android_atomic_and(0, &mRepaintEverything);
for (size_t dpy=0 ; dpy<mDisplays.size() ; dpy++) {
const sp<DisplayDevice>& hw(mDisplays[dpy]);
if (hw->isDisplayOn()) {
// transform the dirty region into this screen's coordinate space
const Region dirtyRegion(hw->getDirtyRegion(repaintEverything));

// repaint the framebuffer (if needed)
doDisplayComposition(hw, dirtyRegion);

hw->dirtyRegion.clear();
hw->flip();
}
}
postFramebuffer();
}
4.5.1 doDisplayComposition
void SurfaceFlinger::doDisplayComposition(
const sp<const DisplayDevice>& displayDevice,
const Region& inDirtyRegion)
{
// We only need to actually compose the display if:
// 1) It is being handled by hardware composer, which may need this to
// keep its virtual display state machine in sync, or
// 2) There is work to be done (the dirty region isn't empty)
bool isHwcDisplay = displayDevice->getHwcDisplayId() >= 0;
if (!isHwcDisplay && inDirtyRegion.isEmpty()) {
ALOGV("Skipping display composition");
return;
}

ALOGV("doDisplayComposition");
if (!doComposeSurfaces(displayDevice)) return;

// swap buffers (presentation)
displayDevice->swapBuffers(getHwComposer());
}
4.5.2 doComposeSurfaces
bool SurfaceFlinger::doComposeSurfaces(const sp<const DisplayDevice>& displayDevice)
{
ALOGV("doComposeSurfaces");

const Region bounds(displayDevice->bounds());
const DisplayRenderArea renderArea(displayDevice);
const auto hwcId = displayDevice->getHwcDisplayId();
const bool hasClientComposition = getBE().mHwc->hasClientComposition(hwcId);
ATRACE_INT("hasClientComposition", hasClientComposition);

bool applyColorMatrix = false;
bool needsEnhancedColorMatrix = false;

if (hasClientComposition) {
ALOGV("hasClientComposition");

Dataspace outputDataspace = Dataspace::UNKNOWN;
if (displayDevice->hasWideColorGamut()) {
outputDataspace = displayDevice->getCompositionDataSpace();
}
getBE().mRenderEngine->setOutputDataSpace(outputDataspace);
getBE().mRenderEngine->setDisplayMaxLuminance(
displayDevice->getHdrCapabilities().getDesiredMaxLuminance());

const bool hasDeviceComposition = getBE().mHwc->hasDeviceComposition(hwcId);
const bool skipClientColorTransform = getBE().mHwc->hasCapability(
HWC2::Capability::SkipClientColorTransform);

mat4 colorMatrix;
applyColorMatrix = !hasDeviceComposition && !skipClientColorTransform;
if (applyColorMatrix) {
colorMatrix = mDrawingState.colorMatrix;
}

// The current enhanced saturation matrix is designed to enhance Display P3,
// thus we only apply this matrix when the render intent is not colorimetric
// and the output color space is Display P3.
needsEnhancedColorMatrix =
(displayDevice->getActiveRenderIntent() >= RenderIntent::ENHANCE &&
outputDataspace == Dataspace::DISPLAY_P3);
if (needsEnhancedColorMatrix) {
colorMatrix *= mEnhancedSaturationMatrix;
}

getRenderEngine().setupColorTransform(colorMatrix);

if (!displayDevice->makeCurrent()) {
ALOGW("DisplayDevice::makeCurrent failed. Aborting surface composition for display %s",
displayDevice->getDisplayName().string());
getRenderEngine().resetCurrentSurface();

// |mStateLock| not needed as we are on the main thread
if(!getDefaultDisplayDeviceLocked()->makeCurrent()) {
ALOGE("DisplayDevice::makeCurrent on default display failed. Aborting.");
}
return false;
}

// Never touch the framebuffer if we don't have any framebuffer layers
if (hasDeviceComposition) {
// when using overlays, we assume a fully transparent framebuffer
// NOTE: we could reduce how much we need to clear, for instance
// remove where there are opaque FB layers. however, on some
// GPUs doing a "clean slate" clear might be more efficient.
// We'll revisit later if needed.
getBE().mRenderEngine->clearWithColor(0, 0, 0, 0);
} else {
// we start with the whole screen area and remove the scissor part
// we're left with the letterbox region
// (common case is that letterbox ends-up being empty)
const Region letterbox(bounds.subtract(displayDevice->getScissor()));

// compute the area to clear
Region region(displayDevice->undefinedRegion.merge(letterbox));

// screen is already cleared here
if (!region.isEmpty()) {
// can happen with SurfaceView
drawWormhole(displayDevice, region);
}
}

const Rect& bounds(displayDevice->getBounds());
const Rect& scissor(displayDevice->getScissor());
if (scissor != bounds) {
// scissor doesn't match the screen's dimensions, so we
// need to clear everything outside of it and enable
// the GL scissor so we don't draw anything where we shouldn't

// enable scissor for this frame
const uint32_t height = displayDevice->getHeight();
getBE().mRenderEngine->setScissor(scissor.left, height - scissor.bottom,
scissor.getWidth(), scissor.getHeight());
}
}

/*
* and then, render the layers targeted at the framebuffer
*/

ALOGV("Rendering client layers");
const Transform& displayTransform = displayDevice->getTransform();
bool firstLayer = true;
for (auto& layer : displayDevice->getVisibleLayersSortedByZ()) {
const Region clip(bounds.intersect(
displayTransform.transform(layer->visibleRegion)));
ALOGV("Layer: %s", layer->getName().string());
ALOGV(" Composition type: %s",
to_string(layer->getCompositionType(hwcId)).c_str());
if (!clip.isEmpty()) {
switch (layer->getCompositionType(hwcId)) {
case HWC2::Composition::Cursor:
case HWC2::Composition::Device:
case HWC2::Composition::Sideband:
case HWC2::Composition::SolidColor: {
const Layer::State& state(layer->getDrawingState());
if (layer->getClearClientTarget(hwcId) && !firstLayer &&
layer->isOpaque(state) && (layer->getAlpha() == 1.0f)
&& hasClientComposition) {
// never clear the very first layer since we're
// guaranteed the FB is already cleared
layer->clearWithOpenGL(renderArea);
}
break;
}
case HWC2::Composition::Client: {
if ((hwcId < 0) &&
(DisplayUtils::getInstance()->skipColorLayer(layer->getTypeId()))) {
// We are not using h/w composer.
// Skip color (dim) layer for WFD direct streaming.
continue;
}
layer->draw(renderArea, clip);
break;
}
default:
break;
}
} else {
ALOGV(" Skipping for empty clip");
}
firstLayer = false;
}

if (applyColorMatrix || needsEnhancedColorMatrix) {
getRenderEngine().setupColorTransform(mat4());
}

// disable scissor at the end of the frame
getBE().mRenderEngine->disableScissor();
return true;
}
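
The branches above boil down to two decisions: how much of the framebuffer must be cleared, and which layers SurfaceFlinger draws itself. The following self-contained toy sketch is not the AOSP implementation; all types and helper names are invented purely to illustrate that dispatch:

// Toy model of the dispatch inside doComposeSurfaces(); not AOSP code.
#include <vector>

enum class Composition { Device, Client, SolidColor, Cursor, Sideband };

struct Layer {
    Composition type;
    void draw() { /* RenderEngine (GLES) drawing in the real code */ }
};

void composeSketch(std::vector<Layer>& visibleLayersSortedByZ,
                   bool hasClientComposition, bool hasDeviceComposition) {
    if (hasClientComposition) {
        if (hasDeviceComposition) {
            // Mixed mode: HWC overlays sit above the GLES output, so the real code
            // starts from a fully transparent framebuffer (clearWithColor(0,0,0,0)).
        } else {
            // Pure GLES composition: only the undefined/letterbox region needs
            // clearing (drawWormhole in the real code).
        }
    }
    for (auto& layer : visibleLayersSortedByZ) {
        if (layer.type == Composition::Client) {
            layer.draw();  // SurfaceFlinger renders this layer into the client target
        }
        // Device/Cursor/SolidColor/Sideband layers are presented by the HWC; at most
        // SurfaceFlinger clears their region in the client target (clearWithOpenGL).
    }
}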
4.5.3 postFramebuffer
void SurfaceFlinger::postFramebuffer()
{
ATRACE_CALL();
ALOGV("postFramebuffer");

const nsecs_t now = systemTime();
mDebugInSwapBuffers = now;

for (size_t displayId = 0; displayId < mDisplays.size(); ++displayId) {
auto& displayDevice = mDisplays[displayId];
if (!displayDevice->isDisplayOn()) {
continue;
}
const auto hwcId = displayDevice->getHwcDisplayId();
if (hwcId >= 0) {
getBE().mHwc->presentAndGetReleaseFences(hwcId);
}
displayDevice->onSwapBuffersCompleted();
displayDevice->makeCurrent();
for (auto& layer : displayDevice->getVisibleLayersSortedByZ()) {
sp<Fence> releaseFence = Fence::NO_FENCE;

// The layer buffer from the previous frame (if any) is released
// by HWC only when the release fence from this frame (if any) is
// signaled. Always get the release fence from HWC first.
auto hwcLayer = layer->getHwcLayer(hwcId);
if (hwcId >= 0) {
releaseFence = getBE().mHwc->getLayerReleaseFence(hwcId, hwcLayer);
}

// If the layer was client composited in the previous frame, we
// need to merge with the previous client target acquire fence.
// Since we do not track that, always merge with the current
// client target acquire fence when it is available, even though
// this is suboptimal.
if (layer->getCompositionType(hwcId) == HWC2::Composition::Client) {
// merge the HWC release fence with the client target acquire fence
releaseFence = Fence::merge("LayerRelease", releaseFence,
displayDevice->getClientTargetAcquireFence());
}

layer->onLayerDisplayed(releaseFence);
}

// We've got a list of layers needing fences, that are disjoint with
// displayDevice->getVisibleLayersSortedByZ. The best we can do is to
// supply them with the present fence.
if (!displayDevice->getLayersNeedingFences().isEmpty()) {
sp<Fence> presentFence = getBE().mHwc->getPresentFence(hwcId);
for (auto& layer : displayDevice->getLayersNeedingFences()) {
layer->onLayerDisplayed(presentFence);
}
}

if (hwcId >= 0) {
getBE().mHwc->clearReleaseFences(hwcId);
}
}

mLastSwapBufferTime = systemTime() - now;
mDebugInSwapBuffers = 0;

// |mStateLock| not needed as we are on the main thread
if (getBE().mHwc->isConnected(HWC_DISPLAY_PRIMARY)) {
uint32_t flipCount = getDefaultDisplayDeviceLocked()->getPageFlipCount();
if (flipCount % LOG_FRAME_STATS_PERIOD == 0) {
logFrameStats();
}
}
}
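
The fence logic above condenses to one rule: every layer receives its per-layer release fence from the HWC, and a layer that was client-composited must additionally wait on the client target's acquire fence. Below is a minimal sketch of that rule, assuming the AOSP libui Fence API (Fence::merge and Fence::NO_FENCE are real; the helper itself is hypothetical):

#include <ui/Fence.h>

using android::Fence;
using android::sp;

// Hypothetical helper summarizing the per-layer fence choice made in postFramebuffer().
sp<Fence> releaseFenceFor(bool clientComposited,
                          const sp<Fence>& hwcReleaseFence,
                          const sp<Fence>& clientTargetAcquireFence) {
    sp<Fence> releaseFence =
            (hwcReleaseFence != nullptr) ? hwcReleaseFence : Fence::NO_FENCE;
    if (clientComposited) {
        // The GPU may still be reading this buffer while composing the client target,
        // so the client target acquire fence is merged in as well.
        releaseFence = Fence::merge("LayerRelease", releaseFence, clientTargetAcquireFence);
    }
    return releaseFence;  // handed back to the layer via onLayerDisplayed()
}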

4.6 SF.postComposition

void SurfaceFlinger::postComposition(nsecs_t refreshStartTime)
{
ATRACE_CALL();
ALOGV("postComposition");

// Release any buffers which were replaced this frame
nsecs_t dequeueReadyTime = systemTime();
for (auto& layer : mLayersWithQueuedFrames) {
layer->releasePendingBuffer(dequeueReadyTime);
}

// |mStateLock| not needed as we are on the main thread
const sp<const DisplayDevice> hw(getDefaultDisplayDeviceLocked());

getBE().mGlCompositionDoneTimeline.updateSignalTimes();
std::shared_ptr<FenceTime> glCompositionDoneFenceTime;
if (hw && getBE().mHwc->hasClientComposition(HWC_DISPLAY_PRIMARY)) {
glCompositionDoneFenceTime =
std::make_shared<FenceTime>(hw->getClientTargetAcquireFence());
getBE().mGlCompositionDoneTimeline.push(glCompositionDoneFenceTime);
} else {
glCompositionDoneFenceTime = FenceTime::NO_FENCE;
}

getBE().mDisplayTimeline.updateSignalTimes();

int disp = getVsyncSource();
sp<Fence> presentFence = (disp != -1) ? getBE().mHwc->getPresentFence(disp) : Fence::NO_FENCE;
auto presentFenceTime = std::make_shared<FenceTime>(presentFence);
getBE().mDisplayTimeline.push(presentFenceTime);

nsecs_t vsyncPhase = mPrimaryDispSync.computeNextRefresh(0);
nsecs_t vsyncInterval = mPrimaryDispSync.getPeriod();

// We use the refreshStartTime which might be sampled a little later than
// when we started doing work for this frame, but that should be okay
// since updateCompositorTiming has snapping logic.
updateCompositorTiming(
vsyncPhase, vsyncInterval, refreshStartTime, presentFenceTime);
CompositorTiming compositorTiming;
{
std::lock_guard<std::mutex> lock(getBE().mCompositorTimingLock);
compositorTiming = getBE().mCompositorTiming;
}

mDrawingState.traverseInZOrder([&](Layer* layer) {
bool frameLatched = layer->onPostComposition(glCompositionDoneFenceTime,
presentFenceTime, compositorTiming);
if (frameLatched) {
recordBufferingStats(layer->getName().string(),
layer->getOccupancyHistory(false));
}
});

if (presentFenceTime->isValid()) {
if (mPrimaryDispSync.addPresentFence(presentFenceTime)) {
enableHardwareVsync();
} else {
disableHardwareVsync(false);
}
}

forceResyncModel();
if (!hasSyncFramework) {
if (getBE().mHwc->isConnected(HWC_DISPLAY_PRIMARY) && hw->isDisplayOn()) {
enableHardwareVsync();
}
}

if (mAnimCompositionPending) {
mAnimCompositionPending = false;

if (presentFenceTime->isValid()) {
mAnimFrameTracker.setActualPresentFence(
std::move(presentFenceTime));
} else if (getBE().mHwc->isConnected(HWC_DISPLAY_PRIMARY)) {
// The HWC doesn't support present fences, so use the refresh
// timestamp instead.
nsecs_t presentTime =
getBE().mHwc->getRefreshTimestamp(HWC_DISPLAY_PRIMARY);
mAnimFrameTracker.setActualPresentTime(presentTime);
}
mAnimFrameTracker.advanceFrame();
}

dumpDrawCycle(false);

mTimeStats.incrementTotalFrames();
if (mHadClientComposition) {
mTimeStats.incrementClientCompositionFrames();
}

if (getBE().mHwc->isConnected(HWC_DISPLAY_PRIMARY) &&
hw->getPowerMode() == HWC_POWER_MODE_OFF) {
return;
}

nsecs_t currentTime = systemTime();
if (mHasPoweredOff) {
mHasPoweredOff = false;
} else {
nsecs_t elapsedTime = currentTime - getBE().mLastSwapTime;
size_t numPeriods = static_cast<size_t>(elapsedTime / vsyncInterval);
if (numPeriods < SurfaceFlingerBE::NUM_BUCKETS - 1) {
getBE().mFrameBuckets[numPeriods] += elapsedTime;
} else {
getBE().mFrameBuckets[SurfaceFlingerBE::NUM_BUCKETS - 1] += elapsedTime;
}
getBE().mTotalTime += elapsedTime;
}
getBE().mLastSwapTime = currentTime;

{
std::lock_guard lock(mTexturePoolMutex);
const size_t refillCount = mTexturePoolSize - mTexturePool.size();
if (refillCount > 0) {
const size_t offset = mTexturePool.size();
mTexturePool.resize(mTexturePoolSize);
getRenderEngine().genTextures(refillCount, mTexturePool.data() + offset);
ATRACE_INT("TexturePoolSize", mTexturePool.size());
}
}
}
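
The statistics block near the end of postComposition buckets each swap-to-swap interval by how many vsync periods it spanned. A standalone sketch of that bucketing follows (the bucket count here is illustrative, not necessarily the value used by SurfaceFlingerBE):

#include <array>
#include <cstddef>
#include <cstdint>

using nsecs_t = int64_t;
constexpr size_t NUM_BUCKETS = 8;  // illustrative value

// Toy reimplementation of the frame-bucket accounting in postComposition().
struct FrameStats {
    std::array<nsecs_t, NUM_BUCKETS> frameBuckets{};
    nsecs_t totalTime = 0;

    void addFrame(nsecs_t elapsedTime, nsecs_t vsyncInterval) {
        // A frame that took N vsync periods is charged to bucket N,
        // with anything slower than the last bucket clamped into it.
        size_t numPeriods = static_cast<size_t>(elapsedTime / vsyncInterval);
        if (numPeriods >= NUM_BUCKETS) numPeriods = NUM_BUCKETS - 1;
        frameBuckets[numPeriods] += elapsedTime;
        totalTime += elapsedTime;
    }
};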

五、Summary

This article walked through SurfaceFlinger's drawing-related flow: first the startup of SurfaceFlinger, then how the Vsync signal is delivered to drive screen drawing, and finally how the composed image is output to the display.

These three flows mainly involve the following threads:

  • Main thread "/system/bin/surfaceflinger": the SurfaceFlinger main thread
  • Thread "EventThread": EventThread
  • Thread "EventControl": EventControlThread
  • Thread "DispSync": DispSyncThread

SurfaceFlinger startup process

1. Start the graphics allocator service

2. Start the binder thread pool, with the maximum number of binder threads set to 4

3. Set the SurfaceFlinger process to a high priority and configure its scheduling policy (foreground sched policy, system-background cpuset)

4. Create the SurfaceFlinger instance and initialize it, which starts the app and sf EventThread threads

5. Register the SurfaceFlinger service and GpuService

6. Start the display service, and finally call SurfaceFlinger's run method

Vsync signal handling

1. To receive Vsync signals, a listener must register first; when a Vsync signal arrives, onVsyncReceived is called

2. mCond.signal() called from DispSyncThread.updateModel wakes up the DispSyncThread

3. EventThread::onVSyncEvent() calls mCondition.notify_all() to wake up the EventThread

4. DisplayEventReceiver.sendEvents calls BitTube::sendObjects; when the data arrives, MQ.cb_eventReceiver runs and, through the handler message mechanism, the flow enters the SurfaceFlinger main thread, where SF.onMessageReceived is called (a consumer-side sketch follows this list)
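
As a concrete reference for the consumer end of this chain (step 4), a native client receives these events through DisplayEventReceiver, which wraps the BitTube; this is also how Choreographer's native layer consumes vsync. A minimal sketch assuming the AOSP libgui/libutils APIs (the callback body and loop are hypothetical):

#include <gui/DisplayEventReceiver.h>
#include <utils/Looper.h>

using namespace android;

// Hypothetical consumer: drain pending events whenever the BitTube fd becomes readable.
static int onDisplayEvent(int /*fd*/, int /*events*/, void* data) {
    auto* receiver = static_cast<DisplayEventReceiver*>(data);
    DisplayEventReceiver::Event buffer[8];
    ssize_t n;
    while ((n = receiver->getEvents(buffer, 8)) > 0) {
        for (ssize_t i = 0; i < n; i++) {
            if (buffer[i].header.type == DisplayEventReceiver::DISPLAY_EVENT_VSYNC) {
                // draw a frame here, then re-arm vsync below
            }
        }
    }
    receiver->requestNextVsync();  // vsync delivery is one-shot and must be re-requested
    return 1;                      // keep this fd registered with the Looper
}

int main() {
    sp<Looper> looper = new Looper(false /* allowNonCallbacks */);
    DisplayEventReceiver receiver;  // connects to SurfaceFlinger's app EventThread
    looper->addFd(receiver.getFd(), 0, Looper::EVENT_INPUT, onDisplayEvent, &receiver);
    receiver.requestNextVsync();
    while (true) looper->pollOnce(-1);
}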

Graphics output (a call-order sketch follows this list)

1. Decide whether the invalidate pass needs to run, based on whether any layer has been updated since the last frame

2. Rebuild the list of all visible layers for each display

3. Update the HWComposer layers

4. Compose the images of all layers

5. Invoke each layer's onPostComposition callback
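
These five steps correspond to the refresh path driven by onMessageReceived(MessageQueue::REFRESH). A stubbed-out sketch of the call order follows; the bodies are placeholders, and only the sequence reflects the flow analyzed above:

// Stub sketch of the refresh sequence; the real methods live in SurfaceFlinger.
void preComposition() {}       // per-layer onPreComposition(); may schedule another invalidate
void rebuildLayerStacks() {}   // recompute the visible-layer list for every display
void setUpHWComposer() {}      // create/update HWC layers and validate the display
void doComposition() {}        // doComposeSurfaces() + postFramebuffer() for each display
void postComposition() {}      // release fences, DispSync feedback, frame statistics

void handleMessageRefresh() {
    preComposition();
    rebuildLayerStacks();
    setUpHWComposer();
    doComposition();
    postComposition();
}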