Exploring Apple Live Photos and How to Implement Them

Live Photos piqued my curiosity: what kind of data structure is this, how does CapCut (剪映) turn an ordinary video into a Live Photo, and how do the two differ?

Before dissecting Live Photos, let's start with a quick overview of common image formats and where each one is used.

| Format | Compression | Alpha channel | Animation | Frame rate | Color depth | File size | Browser support | Typical use |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| JPEG | Lossy | ❌ | ❌ | - | 24-bit | Small | ✅ Universal | Photos, general images |
| PNG | Lossless | ✅ | ❌ | - | 24/32-bit | Medium to large | ✅ Universal | UI icons, screenshots, transparency |
| GIF | Lossy (palette) | ✅ (1-bit) | ✅ | Up to 50 fps | 8-bit (256 colors) | Small | ✅ Universal | Simple animations, memes |
| WebP | Lossy / lossless | ✅ | ✅ | High frame rates | 24-bit + alpha | Smaller | ✅ Modern browsers | Web images, animations, stickers |
| HEIC | Lossy / lossless (HEVC) | ✅ | ❌ | - | Up to 12-bit | Very small | ✅ Safari/iOS/macOS | iPhone photos, Live Photos |
| AVIF | Lossy / lossless (AV1) | ✅ | ✅ | High frame rates | 10/12-bit HDR | Very small | ✅ Modern browsers | Next-generation web images |
| BMP | Uncompressed / simple compression | ❌ | ❌ | - | 24-bit | Very large | ❌ Rarely used | Legacy Windows internal format |
| TIFF | Lossless / optional compression | ✅ | ❌ | - | Up to 48-bit | Huge | ❌ Not supported | Scanning, publishing, professional archiving |
| SVG | Vector | ✅ | Partial | Driven by CSS/JS | Scales infinitely | Tiny | ✅ Universal | Icons, graphics, responsive design |
| PDF | Composite | ✅ | ❌ | - | Multi-format container | Medium | ✅ Partial | Document preview, printing, mixed text and graphics |

Now let's take a closer look at the .HEIC (High Efficiency Image Container) format.

What is a HEIC file?

When you take a photo on an iOS device such as an iPhone or iPad, the image – or images, in the case of dual cameras and Live Photos – is saved as a HEIC file. HEIC stands for High Efficiency Image Container. The format is a variant of the High Efficiency Image Format (HEIF) that Apple uses across its mobile devices: Apple stores HEIF images with the .heic extension. HEIF essentially saves images in higher quality than JPEG while also using less space, thanks to more advanced compression technology. It uses High Efficiency Video Coding (HEVC) to compress and store images on the device — taking up roughly half the space of other image formats such as JPEG. Each file uses the .heic or .heics extension, depending on the number of images inside, and also carries metadata describing each image's size, resolution, location, and more. In practice, HEIC is the default capture format across the Apple ecosystem (it is not a RAW format).

HEIC became Apple's default photo storage format with iOS 11 and macOS High Sierra in 2017. Why HEIC? Mostly because it balances technical advantages against user needs.

The core considerations likely include the following:

1) Significantly smaller files (higher compression). HEIC is an image container based on HEVC (H.265) coding. Compared with JPEG:
• at the same quality, files are roughly 40%-60% smaller;
• at the same file size, quality is noticeably better;
• for the 4K / 12-megapixel images a phone camera produces, these savings matter a great deal.

One of Apple's goals was to improve photo quality while saving device storage, especially on iPhones that started at 64 GB or 128 GB.

2) Support for richer image features (transparency, depth, HDR, etc.). As a modern image container, HEIC is far more capable than JPEG:

| Feature | JPEG | HEIC |
| --- | --- | --- |
| Alpha transparency | ❌ | ✅ |
| Multiple frames / multiple images | ❌ | ✅ |
| Animation (e.g. Live Photo) | ❌ | ✅ |
| Depth image / depth-of-field data | ❌ | ✅ |
| HDR 10-bit / 12-bit | ❌ | ✅ |

This matters especially for Apple features such as Live Photo, Portrait mode, and depth-sensing cameras: a single HEIC file can hold the main image, a thumbnail, a depth map, a matte, image attributes, and a color profile (ICC profile) — none of which JPEG can carry natively.
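As a quick illustration (not from the original article), the sketch below uses Apple's ImageIO framework to peek inside a HEIC file; the file path is a placeholder. Note that ImageIO typically exposes only the primary image item through the item count, while depth maps and portrait mattes are surfaced through its auxiliary-data API:

import ImageIO
import Foundation

let url = URL(fileURLWithPath: "/Users/aaron/Downloads/IMG_3568.HEIC")

if let source = CGImageSourceCreateWithURL(url as CFURL, nil) {
    let count = CGImageSourceGetCount(source)               // image items ImageIO exposes
    let primary = CGImageSourceGetPrimaryImageIndex(source) // index of the primary image
    print("items: \(count), primary index: \(primary)")

    for index in 0..<count {
        if let props = CGImageSourceCopyPropertiesAtIndex(source, index, nil) as? [CFString: Any] {
            let width = props[kCGImagePropertyPixelWidth] ?? 0
            let height = props[kCGImagePropertyPixelHeight] ?? 0
            print("item \(index): \(width) x \(height)")
        }
    }

    // Depth map and portrait matte are not separate items here; they come through the auxiliary-data API.
    let hasDepth = CGImageSourceCopyAuxiliaryDataInfoAtIndex(source, 0, kCGImageAuxiliaryDataTypeDepth) != nil
    let hasMatte = CGImageSourceCopyAuxiliaryDataInfoAtIndex(source, 0, kCGImageAuxiliaryDataTypePortraitEffectsMatte) != nil
    print("depth map: \(hasDepth), portrait matte: \(hasMatte)")
}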

3) Decode-friendly on Apple's own hardware (dedicated silicon).
• Apple chips have included a hardware HEVC decoder since the A9 (iPhone 6s);
• HEIC decoding is light on system resources — lower power draw, faster decode;
• the Apple ecosystem is tightly controlled (hardware and software co-designed), so compatibility is not a concern;
• Live Photo, iCloud Photo Library, and AirDrop transfers are all deeply optimized for it.

4) Paving the way for future media standards (unified with HEVC video).
• HEIC uses the same HEVC (H.265) coding technology as the video standard Apple already relies on heavily;
• a unified codec saves system resources and streamlines development;
• it eases later transitions to formats such as HEVC/AV1/AVIF.

This also gives macOS/iOS the ability to handle image sequences, animated images, video thumbnails, and other composite scenarios.

5) Optimized for iCloud (saving bandwidth and storage).
• Photos in Apple's cloud services (especially with iCloud Photo Library enabled) consume a lot of storage;
• HEIC significantly reduces cloud-storage pressure and users' data costs;
• which in turn supports Apple's services-revenue strategy.

6) Industry trends and standards support.
• HEIC is part of an ISO/IEC international standard (HEIF: ISO/IEC 23008-12);
• it does not depend on Apple patents and is technically open (although the HEVC codec is patent-encumbered);
• other vendors have adopted it as well — Samsung, Xiaomi, and Sony cameras also support HEIC;
• the container supports extensions: AVIF (the AV1-based image format) is likewise a HEIF variant.

Having compared HEIC with other common image formats and covered its advantages, let's now look at the data structure of an actual .HEIC file.

Shoot a Live Photo on an iPhone (IMG_3568.HEIC), AirDrop it to a Mac, and inspect it:

(base) ➜ ~ ffprobe /Users/aaron/Downloads/IMG_3568.HEIC

ffprobe version 7.1 Copyright (c) 2007-2024 the FFmpeg developers

built with Apple clang version 16.0.0 (clang-1600.0.26.4)

configuration: --prefix=/opt/homebrew/Cellar/ffmpeg/7.1_3 --enable-shared --enable-pthreads --enable-version3 --cc=clang --host-cflags= --host-ldflags='-Wl,-ld_classic' --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libaribb24 --enable-libbluray --enable-libdav1d --enable-libharfbuzz --enable-libjxl --enable-libmp3lame --enable-libopus --enable-librav1e --enable-librist --enable-librubberband --enable-libsnappy --enable-libsrt --enable-libssh --enable-libsvtav1 --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libspeex --enable-libsoxr --enable-libzmq --enable-libzimg --disable-libjack --disable-indev=jack --enable-videotoolbox --enable-audiotoolbox --enable-neon

libavutil 59. 39.100 / 59. 39.100

libavcodec 61. 19.100 / 61. 19.100

libavformat 61. 7.100 / 61. 7.100

libavdevice 61. 3.100 / 61. 3.100

libavfilter 10. 4.100 / 10. 4.100

libswscale 8. 3.100 / 8. 3.100

libswresample 5. 3.100 / 5. 3.100

libpostproc 58. 3.100 / 58. 3.100

[mov,mp4,m4a,3gp,3g2,mj2 @ 0x152604e70] Derived Image item of type tmap is not implemented. Update your FFmpeg version to the newest one from Git. If the problem still occurs, it means that your file has a feature which has not been implemented.

Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/Users/aaron/Downloads/IMG_3568.HEIC':

Metadata:

major_brand : heic

minor_version : 0

compatible_brands: mif1MiHBMiHAheixMiHEMiPrheicmiaftmap

Duration: N/A, start: 0.000000, bitrate: N/A

Stream group #0:0[0x31]: Tile Grid: hevc (Main Still Picture) (hvc1 / 0x31637668), yuvj420p(pc, smpte170m/smpte432/bt709), 4032x3024 (default)

Stream group #0:1[0x58]: Tile Grid: hevc (Main 10) (hvc1 / 0x31637668), yuv420p10le(pc, smpte170m/smpte432/linear), 2880x2160

Stream group #0:2[0x65]: Tile Grid: hevc (Rext) (hvc1 / 0x31637668), gray(pc), 2016x1512

Stream #0:48[0x32]: Video: hevc (Main 10) (hvc1 / 0x31637668), yuv420p10le(pc), 1024x768, 1 fps, 1 tbr, 1 tbn

Stream #0:49[0x33]: Video: hevc (Rext) (hvc1 / 0x31637668), gray(pc), 2016x1512, 1 fps, 1 tbr, 1 tbn

Stream #0:50[0x35]: Video: hevc (Rext) (hvc1 / 0x31637668), gray(pc), 2016x1512, 1 fps, 1 tbr, 1 tbn

Stream #0:51[0x37]: Video: hevc (Rext) (hvc1 / 0x31637668), gray(pc), 2016x1512, 1 fps, 1 tbr, 1 tbn

Stream #0:52[0x39]: Video: hevc (Main Still Picture) (hvc1 / 0x31637668), yuvj420p(pc, smpte170m/smpte432/bt709), 416x312, 1 fps, 1 tbr, 1 tbn

Side data:

ICC Profile

Now look at the information for a Live Photo image produced by CapCut's video-to-Live-Photo conversion (IMG_7009.HEIC):

(base) ➜ ~ ffprobe /Users/aaron/Downloads/IMG_7009.HEIC

ffprobe version 7.1 Copyright (c) 2007-2024 the FFmpeg developers

built with Apple clang version 16.0.0 (clang-1600.0.26.4)

configuration: --prefix=/opt/homebrew/Cellar/ffmpeg/7.1_3 --enable-shared --enable-pthreads --enable-version3 --cc=clang --host-cflags= --host-ldflags='-Wl,-ld_classic' --enable-ffplay --enable-gnutls --enable-gpl --enable-libaom --enable-libaribb24 --enable-libbluray --enable-libdav1d --enable-libharfbuzz --enable-libjxl --enable-libmp3lame --enable-libopus --enable-librav1e --enable-librist --enable-librubberband --enable-libsnappy --enable-libsrt --enable-libssh --enable-libsvtav1 --enable-libtesseract --enable-libtheora --enable-libvidstab --enable-libvmaf --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libxvid --enable-lzma --enable-libfontconfig --enable-libfreetype --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libspeex --enable-libsoxr --enable-libzmq --enable-libzimg --disable-libjack --disable-indev=jack --enable-videotoolbox --enable-audiotoolbox --enable-neon

libavutil 59. 39.100 / 59. 39.100

libavcodec 61. 19.100 / 61. 19.100

libavformat 61. 7.100 / 61. 7.100

libavdevice 61. 3.100 / 61. 3.100

libavfilter 10. 4.100 / 10. 4.100

libswscale 8. 3.100 / 8. 3.100

libswresample 5. 3.100 / 5. 3.100

libpostproc 58. 3.100 / 58. 3.100

Input #0, jpeg_pipe, from '/Users/aaron/Downloads/IMG_7009.HEIC':

Duration: N/A, bitrate: N/A

Stream #0:0: Video: mjpeg (Baseline), yuvj420p(pc, bt470bg/unknown/unknown), 540x960 [SAR 72:72 DAR 9:16], 25 fps, 25 tbr, 25 tbn

Compared side by side, the converted Live Photo (IMG_7009.HEIC) lacks the Metadata, Stream group, and Stream information of the iPhone-shot original — in fact ffprobe even identifies it as a JPEG stream (jpeg_pipe/mjpeg) despite the .HEIC extension. The difference is substantial, yet curiously the Live Photo still displays and plays normally, which suggests the missing parts are not strictly required.

So how is a Live Photo actually loaded? And what are the extra Metadata, Stream group, and Stream entries used for?

Let's keep digging — this is the key part.

How is a Live Photo loaded?

The system recognizes a Live Photo by matching a .heic (or .jpg) file and a .mov file through a shared assetIdentifier, so the two samples display identically — but their data structures and the underlying capabilities they carry differ significantly.

The structure of a Live Photo

A Live Photo is essentially composite media made up of two parts:

| Part | Content | Purpose |
| --- | --- | --- |
| .HEIC (or .JPG) | The key photo (with metadata) | Displayed as the still photo / cover |
| .MOV | The motion/video part | Played back as the "live" animation |

The two are bound together by a shared field, the assetIdentifier. A sketch of how to read it from both files follows.
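As a minimal sketch (not part of the original code), the snippet below reads that identifier from both halves: on the image side it sits in the Apple MakerNote dictionary under key "17", and on the video side it is the top-level QuickTime metadata item com.apple.quicktime.content.identifier. The file URLs are placeholders:

import AVFoundation
import ImageIO

func imageContentIdentifier(at url: URL) -> String? {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
          let props = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [CFString: Any],
          let makerApple = props[kCGImagePropertyMakerAppleDictionary] as? [String: Any] else { return nil }
    return makerApple["17"] as? String       // Apple MakerNote key "17" carries the asset identifier
}

func videoContentIdentifier(at url: URL) -> String? {
    let asset = AVURLAsset(url: url)
    let items = AVMetadataItem.metadataItems(from: asset.metadata,
                                             filteredByIdentifier: .quickTimeMetadataContentIdentifier)
    return items.first?.stringValue          // "com.apple.quicktime.content.identifier"
}

// Photos treats the pair as a single Live Photo only when these two values match.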

HEIC/HEIF structure. HEIC is a container format built on the ISO Base Media File Format (much like MP4) and supports:

• multiple images (each image is an item);
• per-image properties (main image, auxiliary images, thumbnails, depth maps, and so on);
• HEVC (HEIF) coding;
• tiling, layer composition, depth maps, multiple views, ICC profiles, HDR, and other metadata.

As a result, a single HEIC file may contain many interrelated image "sub-tracks" that together form one complete photo.

What are the Metadata, Stream group, and Stream entries for?

Metadata:

major_brand : heic

minor_version : 0

compatible_brands: mif1MiHBMiHAheixMiHEMiPrheicmiaftmap

Stream group #0:0[0x31]: Tile Grid: hevc (Main Still Picture), 4032x3024 ← main image (Main Still Picture)

Stream group #0:1[0x58]: Tile Grid: hevc (Main 10), 2880x2160 ← image carrying depth-related data

Stream group #0:2[0x65]: Tile Grid: hevc (Rext), gray, 2016x1512 ← grayscale layer (possibly an alpha mask)

Stream #0:48: hevc (Main 10) ← main image

Stream #0:49: hevc (Rext) ← depth/alpha map (possibly used for portrait depth)

Stream #0:50~51: hevc (Rext) ← additional layers, thumbnails, depth mattes, etc.

Stream #0:52: hevc (Main Still Picture) ← thumbnail

Metadata here is the content of the ftyp box (file-type box) in the .HEIC file, part of the ISO Base Media File Format. It declares the file's brand and compatibility information, telling the system and any parser: "this is my format, and these are the standards I am compatible with." A passing familiarity is enough; our focus is on the structure and generation of Live Photos.

| Field | Meaning |
| --- | --- |
| major_brand: heic | Primary brand: this is a HEIC file (HEIF container + HEVC coding). |
| minor_version: 0 | Minor version of the format, usually 0; parsers may ignore it. |
| compatible_brands | A list of brands the file is compatible with, in priority order, so parsers know which standards they can use to read it. |

Each flag in compatible_brands: mif1MiHBMiHAheixMiHEMiPrheicmiaftmap is a four-character code (FourCC), each denoting a feature or compatibility set:

| Flag | Meaning |
| --- | --- |
| mif1 | The baseline HEIF compatibility brand: the file is a minimally compliant HEIF (an image file based on the ISO Base Media File Format) |
| MiHB | Apple private extension brand: HEIF + HEVC + Apple customizations (e.g. Live Photo) |
| MiHA | Apple extension brand indicating HEIF image animation or alpha extensions |
| heix | HEVC-coded image with extensions (alpha channel, multiple frames, depth maps, etc.) |
| MiHE | Apple extension brand: HEIF + depth map + Portrait mode |
| MiPr | Apple extension brand, possibly "portrait rendering" or ProRAW (no official documentation) |
| heic | The standard HEIC brand (HEIF with HEVC); it also appears here when it is the major brand |
| miaf | MIAF (Multi-Image Application Format), the ISO-defined family of composite image formats that HEIC belongs to |
| tmap | Apple-specific tile map extension (likely for depth maps / tile-based images); part of the low-level support for Apple features such as Live Photo and Portrait depth maps |
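If you want to see these fields without ffprobe, the ftyp box is easy to read by hand. The sketch below (an illustration, not from the original article) assumes ftyp is the first box in the file, which holds for HEIC files produced by Apple devices; the path can be any file you want to inspect:

import Foundation

func printFileTypeBox(at url: URL) throws {
    let data = try Data(contentsOf: url)
    guard data.count >= 16 else { return }

    // ISO-BMFF box layout: 4-byte big-endian size, 4-byte type, then the payload.
    let boxSize = data[0..<4].reduce(0) { ($0 << 8) | Int($1) }
    let boxType = String(decoding: data[4..<8], as: UTF8.self)
    guard boxType == "ftyp", boxSize <= data.count else { return }

    let majorBrand = String(decoding: data[8..<12], as: UTF8.self)
    let minorVersion = data[12..<16].reduce(0) { ($0 << 8) | Int($1) }

    // The rest of the box is a list of four-character compatible brands.
    var brands: [String] = []
    var offset = 16
    while offset + 4 <= boxSize {
        brands.append(String(decoding: data[offset..<offset + 4], as: UTF8.self))
        offset += 4
    }
    print("major_brand: \(majorBrand), minor_version: \(minorVersion)")
    print("compatible_brands: \(brands.joined(separator: " "))")
}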

Stream group: this is how FFmpeg represents complex HEIC files — it groups the multiple tracks (tiles) or related image streams that belong to a single image unit.

For example, from the output above:

Stream group #0:0[0x31]: Tile Grid: hevc (Main Still Picture), 4032x3024

Stream group #0:1[0x58]: Tile Grid: hevc (Main 10), 2880x2160

Stream group #0:2[0x65]: Tile Grid: hevc (Rext), gray, 2016x1512

These stream groups correspond to:
• #0:0: the main image (Main Still Picture);
• #0:1: an image carrying depth-related data;
• #0:2: a grayscale layer (possibly an alpha mask).

Each stream group can contain one or more streams (individual tiles or layers), for example:

Stream #0:48: hevc (Main 10), 1024x768

Stream #0:49: hevc (Rext), 2016x1512

...

The stream group abstracts these streams into one "logical composite image," so the decoder knows it is dealing with a complete picture rather than fragments.

So why do Apple-shot HEIC files have stream groups at all? What purpose do they serve, when plain streams would seem sufficient for the above?

Because a HEIC shot by an iPhone:
• may use tiling (splitting the image into blocks that are HEVC-encoded separately);
• contains multiple layers (image, depth, alpha mask);
• encodes each layer as its own track (stream);
• and needs those layers combined logically at display time, which is what the stream group expresses.

So in the output you see:

Stream group #0:0 → manages the tiles of the main image

Stream group #0:1 → manages the depth image

Stream group #0:2 → manages the grayscale layer (alpha or mask)

The benefits:
• the decoder can choose which streams to use (e.g. whether to load the depth map);
• it enables editing features such as Portrait mode and simulated bokeh;
• the system and the HEVC decoder can decode layer by layer, saving resources.

More concretely, the practical value of stream groups shows up in:

Tile-based decoding of very high resolutions
• a huge image (say 12000x9000) is not encoded as a single stream;
• it is split into 100+ tiles, each of which is its own stream;
• the stream group recombines those tiles into one image.

Support for multiple views per file
• one HEIC file may contain several pictures (different angles or versions);
• each picture may have several layers (main image, alpha, thumbnail);
• the stream group tells you which layers belong to which picture.

Helping players and image pipelines organize decoding
• resources are saved;
• it is clear which streams should be decoded first;
• the image-composition logic is cleaner.

By contrast, does an ordinary HEIC have no stream groups? Correct: convert a JPEG to HEIC with a tool such as ffmpeg or libheif and by default you get a single main image — no thumbnail, no alpha layer, no depth map — so there is nothing for a stream group to do. You can see that the Live Photo converted by CapCut (IMG_7009.HEIC) has exactly one stream:

Input #0, jpeg_pipe, ...

Stream #0:0: Video: mjpeg, ...
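For completeness, here is a minimal sketch of the same "single-image HEIC" case using Apple's ImageIO instead of ffmpeg/libheif. The paths are placeholders, and HEIC encoding requires an OS/device with an HEVC encoder:

import ImageIO
import UniformTypeIdentifiers   // UTType.heic (iOS 14+ / macOS 11+)

let input = URL(fileURLWithPath: "/tmp/input.jpg")
let output = URL(fileURLWithPath: "/tmp/output.heic")

if let source = CGImageSourceCreateWithURL(input as CFURL, nil),
   let image = CGImageSourceCreateImageAtIndex(source, 0, nil),
   let destination = CGImageDestinationCreateWithURL(output as CFURL,
                                                     UTType.heic.identifier as CFString,
                                                     1, nil) {
    // Only one image is added, so the resulting HEIC has a single primary item —
    // no tiles, thumbnails, depth maps, or stream groups.
    CGImageDestinationAddImage(destination, image, nil)
    CGImageDestinationFinalize(destination)
}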

🔬 A note on FFmpeg's implementation

A stream group is not actually a native FFmpeg structure; it is an auxiliary grouping added by FFmpeg's demuxer for complex containers, to make parsing and debug output easier.

When a HEIC file contains multiple image items, layers, or tile-grid information, FFmpeg automatically marks them as a stream group; otherwise there is simply a single stream.

A stream, in turn, is a physical track inside the media container (audio, video, image, and so on).

Stream #0:48: hevc (Main 10) ← main image

Stream #0:49: hevc (Rext) ← depth/alpha map (possibly used for portrait depth)

Stream #0:50~51: hevc (Rext) ← additional layers, thumbnails, depth mattes, etc.

Stream #0:52: hevc (Main Still Picture) ← thumbnail

Implementing video-to-Live-Photo conversion

Implementing "video to Live Photo" on iOS is really a matter of reproducing Apple's Live Photo file structure and packaging logic. A Live Photo is a media pair consisting of a JPEG image plus a MOV video, linked by the same assetIdentifier and saved with PHLivePhoto or PHAssetCreationRequest.

The approach: 1. extract a key frame from the video (the still-image time if present, otherwise the midpoint) and save it as a JPEG still; 2. stamp the same assetIdentifier onto the JPEG and the MOV; 3. use PHPhotoLibrary to combine them into a Live Photo and save it to the photo library.

Extracting the key frame from the video

private func generateKeyPhoto(from videoURL: URL) -> URL? {

var percent:Float = 0.5

let videoAsset = AVURLAsset(url: videoURL)

if let stillImageTime = videoAsset.stillImageTime() {

percent = Float(stillImageTime.value) / Float(videoAsset.duration.value)

}

guard let imageFrame = videoAsset.getAssetFrame(percent: percent) else { return nil }

guard let jpegData = UIImageJPEGRepresentation(imageFrame, 1.0) else { return nil }

guard let url = cacheDirectory?.appendingPathComponent(UUID().uuidString).appendingPathExtension("jpg") else { return nil }

do {

try? jpegData.write(to: url)

return url

}

}

// Returns the time in the video marked as the still-image (key photo) frame

func stillImageTime() -> CMTime? {

var stillTime:CMTime? = nil

if let videoReader = try? AVAssetReader(asset: self) {

if let metadataTrack = self.tracks(withMediaType: .metadata).first {

let videoReaderOutput = AVAssetReaderTrackOutput(track: metadataTrack, outputSettings: nil)

videoReader.add(videoReaderOutput)

videoReader.startReading()

let keyStillImageTime = "com.apple.quicktime.still-image-time" // the metadata key Apple uses in a Live Photo's video to mark which frame serves as the still image

let keySpaceQuickTimeMetadata = "mdta" // the QuickTime metadata key space

var found = false

while found == false {

if let sampleBuffer = videoReaderOutput.copyNextSampleBuffer() {

if CMSampleBufferGetNumSamples(sampleBuffer) != 0 {

let group = AVTimedMetadataGroup(sampleBuffer: sampleBuffer)

for item in group?.items ?? [] {

if item.key as? String == keyStillImageTime && item.keySpace!.rawValue == keySpaceQuickTimeMetadata {

stillTime = group?.timeRange.start

//print("stillImageTime = \(CMTimeGetSeconds(stillTime!))")

found = true

break

}

}

}

}

else {

break;

}

}

videoReader.cancelReading()

}

}

return stillTime

}

func getAssetFrame(percent:Float) -> UIImage?

{

let imageGenerator = AVAssetImageGenerator(asset: self)

imageGenerator.appliesPreferredTrackTransform = true

imageGenerator.requestedTimeToleranceAfter = CMTimeMake(1,100)

imageGenerator.requestedTimeToleranceBefore = CMTimeMake(1,100)

var time = self.duration

time.value = Int64(Float(time.value) * percent)

do {

var actualTime = kCMTimeZero

let imageRef = try imageGenerator.copyCGImage(at: time, actualTime:&actualTime)

let img = UIImage(cgImage: imageRef)

return img

}

catch let error as NSError

{

print("Image generation failed with error \(error)")

return nil

}

}

Stamping the same assetIdentifier onto the JPEG and the MOV

Add a UUID().uuidString as the assetIdentifier to the image.

let assetIdentifier = UUID().uuidString

let _keyPhotoURL = imageURL ?? generateKeyPhoto(from: videoURL)

guard let keyPhotoURL = _keyPhotoURL, let pairedImageURL = addAssetID(assetIdentifier, toImage: keyPhotoURL, saveTo: cacheDirectory.appendingPathComponent(assetIdentifier).appendingPathExtension("jpg")) else {

DispatchQueue.main.async {

completion(nil, nil)

}

return

}

// Add the assetIdentifier to the image (written into the Apple MakerNote dictionary under key "17")

func addAssetID(_ assetIdentifier: String, toImage imageURL: URL, saveTo destinationURL: URL) -> URL? {

guard let imageDestination = CGImageDestinationCreateWithURL(destinationURL as CFURL, kUTTypeJPEG, 1, nil),

let imageSource = CGImageSourceCreateWithURL(imageURL as CFURL, nil),

let imageRef = CGImageSourceCreateImageAtIndex(imageSource, 0, nil),

var imageProperties = CGImageSourceCopyPropertiesAtIndex(imageSource, 0, nil) as? [AnyHashable : Any] else { return nil }

let assetIdentifierKey = "17"

let assetIdentifierInfo = [assetIdentifierKey : assetIdentifier]

imageProperties[kCGImagePropertyMakerAppleDictionary] = assetIdentifierInfo

CGImageDestinationAddImage(imageDestination, imageRef, imageProperties as CFDictionary)

CGImageDestinationFinalize(imageDestination)

return destinationURL

}

Add the same UUID().uuidString assetIdentifier to the video.

addAssetID(assetIdentifier, toVideo: videoURL, saveTo: cacheDirectory.appendingPathComponent(assetIdentifier).appendingPathExtension("mov"), progress: progress) { (_videoURL) in

if let pairedVideoURL = _videoURL {

_ = PHLivePhoto.request(withResourceFileURLs: [pairedVideoURL, pairedImageURL], placeholderImage: nil, targetSize: CGSize.zero, contentMode: PHImageContentMode.aspectFit, resultHandler: { (livePhoto: PHLivePhoto?, info: [AnyHashable : Any]) -> Void in

if let isDegraded = info[PHLivePhotoInfoIsDegradedKey] as? Bool, isDegraded {

return

}

DispatchQueue.main.async {

completion(livePhoto, (pairedImageURL, pairedVideoURL))

}

})

} else {

DispatchQueue.main.async {

completion(nil, nil)

}

}

}

// Add the assetIdentifier to the video as QuickTime metadata, and write the still-image-time metadata track

func addAssetID(_ assetIdentifier: String, toVideo videoURL: URL, saveTo destinationURL: URL, progress: @escaping (CGFloat) -> Void, completion: @escaping (URL?) -> Void) {

var audioWriterInput: AVAssetWriterInput?

var audioReaderOutput: AVAssetReaderOutput?

let videoAsset = AVURLAsset(url: videoURL)

let frameCount = videoAsset.countFrames(exact: false)

guard let videoTrack = videoAsset.tracks(withMediaType: .video).first else {

completion(nil)

return

}

do {

// Create the Asset Writer

assetWriter = try AVAssetWriter(outputURL: destinationURL, fileType: .mov)

// Create Video Reader Output

videoReader = try AVAssetReader(asset: videoAsset)

let videoReaderSettings = [kCVPixelBufferPixelFormatTypeKey as String: NSNumber(value: kCVPixelFormatType_32BGRA as UInt32)]

let videoReaderOutput = AVAssetReaderTrackOutput(track: videoTrack, outputSettings: videoReaderSettings)

videoReader?.add(videoReaderOutput)

// Create Video Writer Input

let videoWriterInput = AVAssetWriterInput(mediaType: .video, outputSettings: [AVVideoCodecKey : AVVideoCodecH264, AVVideoWidthKey : videoTrack.naturalSize.width, AVVideoHeightKey : videoTrack.naturalSize.height])

videoWriterInput.transform = videoTrack.preferredTransform

videoWriterInput.expectsMediaDataInRealTime = true

assetWriter?.add(videoWriterInput)

// Create Audio Reader Output & Writer Input

if let audioTrack = videoAsset.tracks(withMediaType: .audio).first {

do {

let _audioReader = try AVAssetReader(asset: videoAsset)

let _audioReaderOutput = AVAssetReaderTrackOutput(track: audioTrack, outputSettings: nil)

_audioReader.add(_audioReaderOutput)

audioReader = _audioReader

audioReaderOutput = _audioReaderOutput

let _audioWriterInput = AVAssetWriterInput(mediaType: .audio, outputSettings: nil)

_audioWriterInput.expectsMediaDataInRealTime = false

assetWriter?.add(_audioWriterInput)

audioWriterInput = _audioWriterInput

} catch {

print(error)

}

}

else {

audioReader = nil

}

// Create necessary identifier metadata and still image time metadata

let assetIdentifierMetadata = metadataForAssetID(assetIdentifier)

let stillImageTimeMetadataAdapter = createMetadataAdaptorForStillImageTime()

assetWriter?.metadata = [assetIdentifierMetadata]

assetWriter?.add(stillImageTimeMetadataAdapter.assetWriterInput)

// Start the Asset Writer

assetWriter?.startWriting()

assetWriter?.startSession(atSourceTime: kCMTimeZero)

// Add still image metadata

let _stillImagePercent: Float = 0.5

stillImageTimeMetadataAdapter.append(AVTimedMetadataGroup(items: [metadataItemForStillImageTime()],timeRange: videoAsset.makeStillImageTimeRange(percent: _stillImagePercent, inFrameCount: frameCount)))

// For end of writing / progress

var writingVideoFinished = false

var writingAudioFinished = false

var currentFrameCount = 0

func didCompleteWriting() {

guard writingAudioFinished && writingVideoFinished else { return }

assetWriter?.finishWriting {

if self.assetWriter?.status == .completed {

completion(destinationURL)

} else {

completion(nil)

}

}

}

// Start writing video

if videoReader?.startReading() ?? false {

videoWriterInput.requestMediaDataWhenReady(on: DispatchQueue(label: "videoWriterInputQueue")) {

while videoWriterInput.isReadyForMoreMediaData {

if let sampleBuffer = videoReaderOutput.copyNextSampleBuffer() {

currentFrameCount += 1

let percent:CGFloat = CGFloat(currentFrameCount)/CGFloat(frameCount)

progress(percent)

if !videoWriterInput.append(sampleBuffer) {

print("Cannot write: \(String(describing: self.assetWriter?.error?.localizedDescription))")

self.videoReader?.cancelReading()

}

} else {

videoWriterInput.markAsFinished()

writingVideoFinished = true

didCompleteWriting()

}

}

}

} else {

writingVideoFinished = true

didCompleteWriting()

}

// Start writing audio

if audioReader?.startReading() ?? false {

audioWriterInput?.requestMediaDataWhenReady(on: DispatchQueue(label: "audioWriterInputQueue")) {

while audioWriterInput?.isReadyForMoreMediaData ?? false {

guard let sampleBuffer = audioReaderOutput?.copyNextSampleBuffer() else {

audioWriterInput?.markAsFinished()

writingAudioFinished = true

didCompleteWriting()

return

}

audioWriterInput?.append(sampleBuffer)

}

}

} else {

writingAudioFinished = true

didCompleteWriting()

}

} catch {

print(error)

completion(nil)

}

}

Use PHPhotoLibrary to combine them into a Live Photo and save it to the photo library

/// Save a Live Photo to the Photo Library by passing the paired image and video.

public class func saveToLibrary(_ resources: LivePhotoResources, completion: @escaping (Bool) -> Void) {

PHPhotoLibrary.shared().performChanges({

let creationRequest = PHAssetCreationRequest.forAsset()

let options = PHAssetResourceCreationOptions()

creationRequest.addResource(with: PHAssetResourceType.pairedVideo, fileURL: resources.pairedVideo, options: options)

creationRequest.addResource(with: PHAssetResourceType.photo, fileURL: resources.pairedImage, options: options)

}, completionHandler: { (success, error) in

if error != nil {

print(error as Any)

}

completion(success)

})

}

Finally, the complete code:

//

// LivePhoto.swift

// Live Photos

import UIKit

import AVFoundation

import MobileCoreServices

import Photos

class LivePhoto {

// MARK: PUBLIC

typealias LivePhotoResources = (pairedImage: URL, pairedVideo: URL)

/// Returns the paired image and video for the given PHLivePhoto

public class func extractResources(from livePhoto: PHLivePhoto, completion: @escaping (LivePhotoResources?) -> Void) {

queue.async {

shared.extractResources(from: livePhoto, completion: completion)

}

}

/// Generates a PHLivePhoto from an image and video. Also returns the paired image and video.

public class func generate(from imageURL: URL?, videoURL: URL, progress: @escaping (CGFloat) -> Void, completion: @escaping (PHLivePhoto?, LivePhotoResources?) -> Void) {

queue.async {

shared.generate(from: imageURL, videoURL: videoURL, progress: progress, completion: completion)

}

}

/// Save a Live Photo to the Photo Library by passing the paired image and video.

public class func saveToLibrary(_ resources: LivePhotoResources, completion: @escaping (Bool) -> Void) {

PHPhotoLibrary.shared().performChanges({

let creationRequest = PHAssetCreationRequest.forAsset()

let options = PHAssetResourceCreationOptions()

creationRequest.addResource(with: PHAssetResourceType.pairedVideo, fileURL: resources.pairedVideo, options: options)

creationRequest.addResource(with: PHAssetResourceType.photo, fileURL: resources.pairedImage, options: options)

}, completionHandler: { (success, error) in

if error != nil {

print(error as Any)

}

completion(success)

})

}

// MARK: PRIVATE

private static let shared = LivePhoto()

private static let queue = DispatchQueue(label: "com.limit-point.LivePhotoQueue", attributes: .concurrent)

lazy private var cacheDirectory: URL? = {

if let cacheDirectoryURL = try? FileManager.default.url(for: .cachesDirectory, in: .userDomainMask, appropriateFor: nil, create: false) {

let fullDirectory = cacheDirectoryURL.appendingPathComponent("com.limit-point.LivePhoto", isDirectory: true)

if !FileManager.default.fileExists(atPath: fullDirectory.path) { // use .path (not .absoluteString) for file-system checks

try? FileManager.default.createDirectory(at: fullDirectory, withIntermediateDirectories: true, attributes: nil)

}

return fullDirectory

}

return nil

}()

deinit {

clearCache()

}

private func generateKeyPhoto(from videoURL: URL) -> URL? {

var percent:Float = 0.5

let videoAsset = AVURLAsset(url: videoURL)

if let stillImageTime = videoAsset.stillImageTime() {

percent = Float(stillImageTime.value) / Float(videoAsset.duration.value)

}

guard let imageFrame = videoAsset.getAssetFrame(percent: percent) else { return nil }

guard let jpegData = UIImageJPEGRepresentation(imageFrame, 1.0) else { return nil }

guard let url = cacheDirectory?.appendingPathComponent(UUID().uuidString).appendingPathExtension("jpg") else { return nil }

do {

try? jpegData.write(to: url)

return url

}

}

private func clearCache() {

if let cacheDirectory = cacheDirectory {

try? FileManager.default.removeItem(at: cacheDirectory)

}

}

private func generate(from imageURL: URL?, videoURL: URL, progress: @escaping (CGFloat) -> Void, completion: @escaping (PHLivePhoto?, LivePhotoResources?) -> Void) {

guard let cacheDirectory = cacheDirectory else {

DispatchQueue.main.async {

completion(nil, nil)

}

return

}

let assetIdentifier = UUID().uuidString

let _keyPhotoURL = imageURL ?? generateKeyPhoto(from: videoURL)

guard let keyPhotoURL = _keyPhotoURL, let pairedImageURL = addAssetID(assetIdentifier, toImage: keyPhotoURL, saveTo: cacheDirectory.appendingPathComponent(assetIdentifier).appendingPathExtension("jpg")) else {

DispatchQueue.main.async {

completion(nil, nil)

}

return

}

addAssetID(assetIdentifier, toVideo: videoURL, saveTo: cacheDirectory.appendingPathComponent(assetIdentifier).appendingPathExtension("mov"), progress: progress) { (_videoURL) in

if let pairedVideoURL = _videoURL {

_ = PHLivePhoto.request(withResourceFileURLs: [pairedVideoURL, pairedImageURL], placeholderImage: nil, targetSize: CGSize.zero, contentMode: PHImageContentMode.aspectFit, resultHandler: { (livePhoto: PHLivePhoto?, info: [AnyHashable : Any]) -> Void in

if let isDegraded = info[PHLivePhotoInfoIsDegradedKey] as? Bool, isDegraded {

return

}

DispatchQueue.main.async {

completion(livePhoto, (pairedImageURL, pairedVideoURL))

}

})

} else {

DispatchQueue.main.async {

completion(nil, nil)

}

}

}

}

private func extractResources(from livePhoto: PHLivePhoto, to directoryURL: URL, completion: @escaping (LivePhotoResources?) -> Void) {

let assetResources = PHAssetResource.assetResources(for: livePhoto)

let group = DispatchGroup()

var keyPhotoURL: URL?

var videoURL: URL?

for resource in assetResources {

let buffer = NSMutableData()

let options = PHAssetResourceRequestOptions()

options.isNetworkAccessAllowed = true

group.enter()

PHAssetResourceManager.default().requestData(for: resource, options: options, dataReceivedHandler: { (data) in

buffer.append(data)

}) { (error) in

if error == nil {

if resource.type == .pairedVideo {

videoURL = self.saveAssetResource(resource, to: directoryURL, resourceData: buffer as Data)

} else {

keyPhotoURL = self.saveAssetResource(resource, to: directoryURL, resourceData: buffer as Data)

}

} else {

print(error as Any)

}

group.leave()

}

}

group.notify(queue: DispatchQueue.main) {

guard let pairedPhotoURL = keyPhotoURL, let pairedVideoURL = videoURL else {

completion(nil)

return

}

completion((pairedPhotoURL, pairedVideoURL))

}

}

private func extractResources(from livePhoto: PHLivePhoto, completion: @escaping (LivePhotoResources?) -> Void) {

if let cacheDirectory = cacheDirectory {

extractResources(from: livePhoto, to: cacheDirectory, completion: completion)

}

}

private func saveAssetResource(_ resource: PHAssetResource, to directory: URL, resourceData: Data) -> URL? {

let fileExtension = UTTypeCopyPreferredTagWithClass(resource.uniformTypeIdentifier as CFString,kUTTagClassFilenameExtension)?.takeRetainedValue()

guard let ext = fileExtension else {

return nil

}

var fileUrl = directory.appendingPathComponent(NSUUID().uuidString)

fileUrl = fileUrl.appendingPathExtension(ext as String)

do {

try resourceData.write(to: fileUrl, options: [Data.WritingOptions.atomic])

} catch {

print("Could not save resource \(resource) to filepath \(String(describing: fileUrl))")

return nil

}

return fileUrl

}

func addAssetID(_ assetIdentifier: String, toImage imageURL: URL, saveTo destinationURL: URL) -> URL? {

guard let imageDestination = CGImageDestinationCreateWithURL(destinationURL as CFURL, kUTTypeJPEG, 1, nil),

let imageSource = CGImageSourceCreateWithURL(imageURL as CFURL, nil),

let imageRef = CGImageSourceCreateImageAtIndex(imageSource, 0, nil),

var imageProperties = CGImageSourceCopyPropertiesAtIndex(imageSource, 0, nil) as? [AnyHashable : Any] else { return nil }

let assetIdentifierKey = "17"

let assetIdentifierInfo = [assetIdentifierKey : assetIdentifier]

imageProperties[kCGImagePropertyMakerAppleDictionary] = assetIdentifierInfo

CGImageDestinationAddImage(imageDestination, imageRef, imageProperties as CFDictionary)

CGImageDestinationFinalize(imageDestination)

return destinationURL

}

var audioReader: AVAssetReader?

var videoReader: AVAssetReader?

var assetWriter: AVAssetWriter?

func addAssetID(_ assetIdentifier: String, toVideo videoURL: URL, saveTo destinationURL: URL, progress: @escaping (CGFloat) -> Void, completion: @escaping (URL?) -> Void) {

var audioWriterInput: AVAssetWriterInput?

var audioReaderOutput: AVAssetReaderOutput?

let videoAsset = AVURLAsset(url: videoURL)

let frameCount = videoAsset.countFrames(exact: false)

guard let videoTrack = videoAsset.tracks(withMediaType: .video).first else {

completion(nil)

return

}

do {

// Create the Asset Writer

assetWriter = try AVAssetWriter(outputURL: destinationURL, fileType: .mov)

// Create Video Reader Output

videoReader = try AVAssetReader(asset: videoAsset)

let videoReaderSettings = [kCVPixelBufferPixelFormatTypeKey as String: NSNumber(value: kCVPixelFormatType_32BGRA as UInt32)]

let videoReaderOutput = AVAssetReaderTrackOutput(track: videoTrack, outputSettings: videoReaderSettings)

videoReader?.add(videoReaderOutput)

// Create Video Writer Input

let videoWriterInput = AVAssetWriterInput(mediaType: .video, outputSettings: [AVVideoCodecKey : AVVideoCodecH264, AVVideoWidthKey : videoTrack.naturalSize.width, AVVideoHeightKey : videoTrack.naturalSize.height])

videoWriterInput.transform = videoTrack.preferredTransform

videoWriterInput.expectsMediaDataInRealTime = true

assetWriter?.add(videoWriterInput)

// Create Audio Reader Output & Writer Input

if let audioTrack = videoAsset.tracks(withMediaType: .audio).first {

do {

let _audioReader = try AVAssetReader(asset: videoAsset)

let _audioReaderOutput = AVAssetReaderTrackOutput(track: audioTrack, outputSettings: nil)

_audioReader.add(_audioReaderOutput)

audioReader = _audioReader

audioReaderOutput = _audioReaderOutput

let _audioWriterInput = AVAssetWriterInput(mediaType: .audio, outputSettings: nil)

_audioWriterInput.expectsMediaDataInRealTime = false

assetWriter?.add(_audioWriterInput)

audioWriterInput = _audioWriterInput

} catch {

print(error)

}

}

else {

audioReader = nil

}

// Create necessary identifier metadata and still image time metadata

let assetIdentifierMetadata = metadataForAssetID(assetIdentifier)

let stillImageTimeMetadataAdapter = createMetadataAdaptorForStillImageTime()

assetWriter?.metadata = [assetIdentifierMetadata]

assetWriter?.add(stillImageTimeMetadataAdapter.assetWriterInput)

// Start the Asset Writer

assetWriter?.startWriting()

assetWriter?.startSession(atSourceTime: kCMTimeZero)

// Add still image metadata

let _stillImagePercent: Float = 0.5

stillImageTimeMetadataAdapter.append(AVTimedMetadataGroup(items: [metadataItemForStillImageTime()],timeRange: videoAsset.makeStillImageTimeRange(percent: _stillImagePercent, inFrameCount: frameCount)))

// For end of writing / progress

var writingVideoFinished = false

var writingAudioFinished = false

var currentFrameCount = 0

func didCompleteWriting() {

guard writingAudioFinished && writingVideoFinished else { return }

assetWriter?.finishWriting {

if self.assetWriter?.status == .completed {

completion(destinationURL)

} else {

completion(nil)

}

}

}

// Start writing video

if videoReader?.startReading() ?? false {

videoWriterInput.requestMediaDataWhenReady(on: DispatchQueue(label: "videoWriterInputQueue")) {

while videoWriterInput.isReadyForMoreMediaData {

if let sampleBuffer = videoReaderOutput.copyNextSampleBuffer() {

currentFrameCount += 1

let percent:CGFloat = CGFloat(currentFrameCount)/CGFloat(frameCount)

progress(percent)

if !videoWriterInput.append(sampleBuffer) {

print("Cannot write: \(String(describing: self.assetWriter?.error?.localizedDescription))")

self.videoReader?.cancelReading()

}

} else {

videoWriterInput.markAsFinished()

writingVideoFinished = true

didCompleteWriting()

}

}

}

} else {

writingVideoFinished = true

didCompleteWriting()

}

// Start writing audio

if audioReader?.startReading() ?? false {

audioWriterInput?.requestMediaDataWhenReady(on: DispatchQueue(label: "audioWriterInputQueue")) {

while audioWriterInput?.isReadyForMoreMediaData ?? false {

guard let sampleBuffer = audioReaderOutput?.copyNextSampleBuffer() else {

audioWriterInput?.markAsFinished()

writingAudioFinished = true

didCompleteWriting()

return

}

audioWriterInput?.append(sampleBuffer)

}

}

} else {

writingAudioFinished = true

didCompleteWriting()

}

} catch {

print(error)

completion(nil)

}

}

private func metadataForAssetID(_ assetIdentifier: String) -> AVMetadataItem {

let item = AVMutableMetadataItem()

let keyContentIdentifier = "com.apple.quicktime.content.identifier"

let keySpaceQuickTimeMetadata = "mdta"

item.key = keyContentIdentifier as (NSCopying & NSObjectProtocol)?

item.keySpace = AVMetadataKeySpace(rawValue: keySpaceQuickTimeMetadata)

item.value = assetIdentifier as (NSCopying & NSObjectProtocol)?

item.dataType = "com.apple.metadata.datatype.UTF-8"

return item

}

private func createMetadataAdaptorForStillImageTime() -> AVAssetWriterInputMetadataAdaptor {

let keyStillImageTime = "com.apple.quicktime.still-image-time"

let keySpaceQuickTimeMetadata = "mdta"

let spec : NSDictionary = [

kCMMetadataFormatDescriptionMetadataSpecificationKey_Identifier as NSString:

"\(keySpaceQuickTimeMetadata)/\(keyStillImageTime)",

kCMMetadataFormatDescriptionMetadataSpecificationKey_DataType as NSString:

"com.apple.metadata.datatype.int8" ]

var desc : CMFormatDescription? = nil

CMMetadataFormatDescriptionCreateWithMetadataSpecifications(kCFAllocatorDefault, kCMMetadataFormatType_Boxed, [spec] as CFArray, &desc)

let input = AVAssetWriterInput(mediaType: .metadata,

outputSettings: nil, sourceFormatHint: desc)

return AVAssetWriterInputMetadataAdaptor(assetWriterInput: input)

}

private func metadataItemForStillImageTime() -> AVMetadataItem {

let item = AVMutableMetadataItem()

let keyStillImageTime = "com.apple.quicktime.still-image-time"

let keySpaceQuickTimeMetadata = "mdta"

item.key = keyStillImageTime as (NSCopying & NSObjectProtocol)?

item.keySpace = AVMetadataKeySpace(rawValue: keySpaceQuickTimeMetadata)

item.value = 0 as (NSCopying & NSObjectProtocol)?

item.dataType = "com.apple.metadata.datatype.int8"

return item

}

}

fileprivate extension AVAsset {

func countFrames(exact:Bool) -> Int {

var frameCount = 0

if let videoReader = try? AVAssetReader(asset: self) {

if let videoTrack = self.tracks(withMediaType: .video).first {

frameCount = Int(CMTimeGetSeconds(self.duration) * Float64(videoTrack.nominalFrameRate))

if exact {

frameCount = 0

let videoReaderOutput = AVAssetReaderTrackOutput(track: videoTrack, outputSettings: nil)

videoReader.add(videoReaderOutput)

videoReader.startReading()

// count frames

while true {

let sampleBuffer = videoReaderOutput.copyNextSampleBuffer()

if sampleBuffer == nil {

break

}

frameCount += 1

}

videoReader.cancelReading()

}

}

}

return frameCount

}

// Returns the time in the video marked as the still-image (key photo) frame

func stillImageTime() -> CMTime? {

var stillTime:CMTime? = nil

if let videoReader = try? AVAssetReader(asset: self) {

if let metadataTrack = self.tracks(withMediaType: .metadata).first {

let videoReaderOutput = AVAssetReaderTrackOutput(track: metadataTrack, outputSettings: nil)

videoReader.add(videoReaderOutput)

videoReader.startReading()

let keyStillImageTime = "com.apple.quicktime.still-image-time" // the metadata key Apple uses in a Live Photo's video to mark which frame serves as the still image

let keySpaceQuickTimeMetadata = "mdta" // the QuickTime metadata key space

var found = false

while found == false {

if let sampleBuffer = videoReaderOutput.copyNextSampleBuffer() {

if CMSampleBufferGetNumSamples(sampleBuffer) != 0 {

let group = AVTimedMetadataGroup(sampleBuffer: sampleBuffer)

for item in group?.items ?? [] {

if item.key as? String == keyStillImageTime && item.keySpace!.rawValue == keySpaceQuickTimeMetadata {

stillTime = group?.timeRange.start

//print("stillImageTime = \(CMTimeGetSeconds(stillTime!))")

found = true

break

}

}

}

}

else {

break;

}

}

videoReader.cancelReading()

}

}

return stillTime

}

func makeStillImageTimeRange(percent:Float, inFrameCount:Int = 0) -> CMTimeRange {

var time = self.duration

var frameCount = inFrameCount

if frameCount == 0 {

frameCount = self.countFrames(exact: true)

}

let frameDuration = Int64(Float(time.value) / Float(frameCount))

time.value = Int64(Float(time.value) * percent)

//print("stillImageTime = \(CMTimeGetSeconds(time))")

return CMTimeRangeMake(time, CMTimeMake(frameDuration, time.timescale))

}

func getAssetFrame(percent:Float) -> UIImage?

{

let imageGenerator = AVAssetImageGenerator(asset: self)

imageGenerator.appliesPreferredTrackTransform = true

imageGenerator.requestedTimeToleranceAfter = CMTimeMake(1,100)

imageGenerator.requestedTimeToleranceBefore = CMTimeMake(1,100)

var time = self.duration

time.value = Int64(Float(time.value) * percent)

do {

var actualTime = kCMTimeZero

let imageRef = try imageGenerator.copyCGImage(at: time, actualTime:&actualTime)

let img = UIImage(cgImage: imageRef)

return img

}

catch let error as NSError

{

print("Image generation failed with error \(error)")

return nil

}

}

}
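To tie everything together, here is a hypothetical call site for the LivePhoto helper above; the video path is a placeholder, and photo-library add permission is assumed to have been granted:

let videoURL = URL(fileURLWithPath: "/path/to/video.mov")

LivePhoto.generate(from: nil, videoURL: videoURL, progress: { percent in
    print("progress: \(percent)")
}) { livePhoto, resources in
    // `livePhoto` can be shown in a PHLivePhotoView; `resources` holds the paired files on disk.
    guard let resources = resources else { return }
    LivePhoto.saveToLibrary(resources) { success in
        print(success ? "Saved as a Live Photo" : "Save failed")
    }
}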

References:
• HEIC files: How to create, edit and open them
• GitHub — LivePhoto
• Working with Live Photos — In-depth discussion about the Apple Live Photo format (Chinese translation: 【翻译】使用 Live Photo - 关于 Apple Live Photo 格式的深入讨论)
• PHLivePhoto — PhotoKit
• LivePhotosKit JS
• What's new in camera capture
