Does the Xiaomi TV 2S audio output support AAC audio encoding?

Xiaomi TV 3 external speakers: hands-on results (Xiaomi TV 3 bar, Baidu Tieba)
Xiaomi TV 3 External Speakers: Hands-On Results
Background

A few days after buying the Xiaomi TV I posted a thread about this. To keep attention on the issue and make sure Xiaomi didn't ignore it, I hid part of the content behind a reply-to-view gate. Some time has passed and I am still wrestling with the problem, so here is a full account of why I need external audio output and what I have tried.

Audio on the Xiaomi TV 3 and its output options

The back of the Xiaomi TV 3 host unit offers only a subwoofer output, plus USB ports (bidirectional); wirelessly there is Bluetooth. I saw and heard the 60-inch Xiaomi TV 3 in person at a Mi Home store before ordering, so I knew its sound quality and ports well. My thinking was: for everyday TV the integrated soundbar is enough; for movies, where it falls far short, I could output audio from my own player to my amplifier without going through the Xiaomi host. With that plan, I placed the order.

Why not bypass the built-in decoder entirely?

Once the TV was installed I started testing eagerly. All kinds of 4K demo clips played smoothly with vivid picture quality, and through a gigabit wired router I could stream my local HD library with almost no buffering, so the Xiaomi host's video decoding and rendering earned high praise from me. Then I put my plan into action: external players and a PC, bypassing the Xiaomi host. Two problems appeared:

1. External devices (set-top box, Mi Box, PC, HD media player) fed through the Xiaomi host's HDMI show noticeably worse color than the host's own player. Matching the picture settings to the internal player does not help: the image is washed out, with sharpness, saturation, and contrast all clearly reduced.

2. In demanding scenes, the host's playback is far smoother than any of my players, with no dropped or skipped frames at all. Every device I own, whether software decoding on a new PC or hardware decoding on a media player, falls short of the Xiaomi host. I had never noticed this before.

So I faced a see-saw problem: use the Xiaomi host's speakers and sacrifice my ears, or use an external player and give up the internal decoder's vivid, smooth output. Tolerable as a short-term situation, unbearable as a long-term one. So I decided to keep tinkering: for picture quality and ease of use, playback would stay on the Xiaomi host, and I would not rest until I got audio out of it.

S/PDIF

In the Xiaomi TV 2 era the audio outputs were complete. There was a soundbar accessory, but you could say the designers had not yet driven themselves mad. According to Soomal's review, the TV 2's audio had an SRC (sample rate conversion) problem, but having an output beats having none. The TV 2's settings offered several options, among them an S/PDIF output: in fact a 2.5mm jack that can carry either analog or digital signals. The TV 3 has no audio output option at all. The Mi Box provides the same jack, and there it can output video as well as audio, depending on how you use it. On the TV 3 host the 2.5mm jack is labeled only as a subwoofer port, with no mention of analog or digital output, yet the TV's built-in help manual does say the jack is S/PDIF, as shown in the photo. (The color blocks are just a screen refresh artifact, ignore them.) Plenty of Taobao shops sell the matching cable, so I bought one at once to try.

Result (hidden behind reply-to-view to keep official and community attention; please bump the thread): I first verified the cable on a Mi Box, and it was fine. Then I plugged it into the TV 3 host's 2.5mm jack with the other end in the amplifier's input: nothing but meaningless noise, while the host's soundbar kept playing. Repeated replugging gave either silence or static. Setting the output through the "connect to subwoofer" option just reported "no subwoofer found". So the cable is now gathering dust.

USB

Since version 4.0, Android has supported USB OTG, meaning Android acts as the USB host and drives external USB accessories: mice, microphones, and so on. One important capability in there: USB sound cards! There are plenty of examples online of Android devices serving as digital transports, sending digital audio out over USB to an amplifier, the Mi Box among them. USB output has the advantage of bypassing the host's internal decoding and sending the audio data straight out over USB, which may avoid SRC degradation and keeps the signal clean. Compared with analog output after SRC, or Bluetooth compression on top of SRC with its extra layers of loss, it is arguably the ideal solution. So I went for it.

Result (hidden content): I deliberately picked an online seller whose USB sound card was advertised as Mi Box compatible. On the Mi Box it was indeed plug-and-play. On the Xiaomi TV 3 host: no reaction at all. Still nothing after a reboot, still nothing after the next system update. A colleague tried it on a Xiaomi TV 2: nothing. It seems the Xiaomi TV hosts deliberately block USB sound cards? So the USB sound card is now gathering dust.

Bluetooth

Bluetooth was the option I least wanted. The Xiaomi host does not use a CSR chip, so there is no support for the high-quality aptX codec, and whether it actually uses the lowly SBC or the not-much-better AAC I have no way to verify. But with nothing else left I had to use it, and fortunately I had confirmed early on that its Bluetooth output works.

Result (hidden content): I bought a Bluetooth audio receiver with a top-tier chip (CSR8645). Pairing went smoothly, I wired it to the amplifier, and at last my speakers made sound; tears of joy. I even compared APE files played directly on the amplifier against the same files sent from the Xiaomi host over Bluetooth. The Bluetooth sound is indeed less full and less three-dimensional, but it still beats the soundbar by a good margin. I thought the dust had settled. Then I switched from pure music listening to everyday use and was stunned. Watching the set-top box, the sound lags by 0.2-0.3 seconds: lips out of sync, a bit like a stage performer lip-syncing to a broadcast feed with a slight delay. With the host's built-in player it is far worse: the audio lags by 2-3 seconds! Often the line has been spoken and the scene has cut away before the sound arrives. It was the same across multiple video files; turn Bluetooth off and let the host play the sound, and there is no problem at all. Hunting for the root cause, I tried the other Bluetooth speakers in my home: same delay. Are all Xiaomi TVs like this? A colleague played over Bluetooth on a Xiaomi TV 2 and reported no delay at all, only a dropout every few seconds. So the latency problem is evidently not unsolvable. Meanwhile, the Bluetooth receiver is gathering dust.

Conclusion

I do not know whether Xiaomi's TV designers and executives use their own TVs. I believe anyone with a settled life and a taste for audio-visual quality has a speaker system at home, large or small. As TVs become the smart home's media hub, there is no such thing as too many input and output ports. Even ten years ago, an ordinary home TV had two or more audio outputs. Because the TV 3 integrates a soundbar, Xiaomi makes customers give up (indeed, seals off) every path to audio expansion, which is not a wise move. Ironically, the standalone 999-yuan Mi TV host has a dedicated setting for splitting audio out over HDMI and can drive a modern AV receiver, while the bundled TV 3, with HDMI folded into its MI PORT, leaves you hopeless once again. The Xiaomi TV 3 has a dependable, easy-to-use system and satisfying video decoding. Its biggest shortcomings are the picture quality of external inputs and audio output. I trust both are temporary, but when will they be fixed? I am waiting. As a user I have done everything I can; the rest is up to you, Xiaomi engineers. For the three problems above, my priority order is USB -> S/PDIF -> Bluetooth; by how widespread the gear is, S/PDIF -> Bluetooth -> USB. Of course, solving all three would be best...
So that means you can't connect other speakers to the Xiaomi TV 3?

So the Xiaomi TV 3 and its external soundbar can't output audio to powered speakers? Is that it? There isn't even a 3.5mm headphone jack?

Looks good, reading on.

Damn, lost all interest in an instant. Now I don't know which TV to buy; I was planning to hook up a Bluetooth speaker and run an HDMI cable from my PC to game on the big screen.

What is this cable called?

Can external speakers be used at all?

I bought a modern AV receiver, and it still wouldn't connect when it arrived. I meant to try other approaches, but looking at this thread the OP has already done everything I could think of.

OP, well done. You've saved me the money of trying these one by one.

Hope Xiaomi takes this seriously and works out how to connect amplifiers and speakers.

And how are we supposed to do karaoke at home? One more question: after the TV has been running a while the sound starts to lag; pausing fixes it for a bit, then the delay comes back. Does anyone else see this?

OP, so Bluetooth wireless headphones can't be connected either?
A digital-TV set-top box through the Xiaomi host... the results? Don't ask.

Reply-to-view, seriously??

Everyone here has bought, or is about to buy, the Xiaomi TV 3. At launch it was marketed as a "modular" TV whose host unit and screen could be upgraded independently!! In reality that's impossible: the official store sells neither a standalone screen for the modular TV 3 nor a matching host unit. I hope we can band together to defend our rights, so that our TVs really can be upgraded part by part as advertised. It's not that we don't support domestic brands; we just want Xiaomi to stop blurring concepts and treating consumers like fools!

Xiaomi's approach is too closed. This is an era of interoperability; do they really expect everyone to buy only their products and starve every other vendor out?

HDMI ARC works with no problems at all.

Let me see how it's done.
A summary of issues with the Xiaomi TV's S/PDIF digital audio output!
1. The Xiaomi TV's digital output is coaxial; the S/PDIF coaxial cable pinout was shown in a diagram in the original post (not reproduced here).
2. An AV receiver detects the TV's digital audio output format as PCM, 48 kHz.
3. Playing FLAC, APE, WAV and other lossless audio with the built-in player, the output is still PCM, 48 kHz.
4. Playing 1080p/AC3 and 1080p/DTS video with the built-in player, the output is also PCM, 48 kHz.
5. In network TV apps (爱家TV and the like), live TV channels produce no digital audio output at all, while on-demand movies and series output PCM, 48 kHz.

Testing will continue.
Does the HDMI ARC audio return feature on the 55-inch Xiaomi TV 4 actually work?
I recommended the 55-inch Xiaomi TV 4 for the boss's lounge. The boss wanted the TV's sound routed to his amplifier, and I ended up embarrassed: after two days of fiddling I simply could not get any sound out. I did not dare say the TV was at fault, so I suggested his amplifier, bought five or six years ago, might not support such a new TV. Then I checked online: gear from five or six years ago already supported ARC audio return. I just do not get it. You put the port on the back and no sound comes out of it; what is it for, decoration? The official phone support did not even know what ARC was, yet customer service assured me unambiguously that the feature exists! If so, is my TV defective, and can I return it?
Did you solve it? I am choosing an AV receiver right now and the ARC port is what I care about.

I have the same problem: ARC to the receiver gives no sound. Is the ARC feature I paid for real or not? Please answer.

Thanks for the feedback. 1. What model is your receiver? 2. For HDMI (ARC) the receiver must be connected to the TV's specific HDMI port; the label is printed on the back of the TV. 3. After connecting, go to Settings - Sound and set the sound output to HDMI (ARC). 4. You can switch the output format between PCM and RAW data. 5. If it still does not work after the above, send me your contact details by private message.
Can the Xiaomi TV's built-in speakers and Bluetooth output audio at the same time? (Baidu Zhidao)
Yes, it can. On the TV itself: the second-generation quantum dot panel really is excellent, the picture looks natural and lifelike, the curved screen adds a pleasant sense of presence, and it is great for playing on a PS4. The set is cleverly tuned: the speakers occupy a small space yet the sound is quite pleasing, big energy from a little box. In short, judged as a home TV it performs well in both picture and sound, but in features it still cannot match sets costing several thousand more. At this price you cannot ask for too much.
Real-time audio and video capture with AVCaptureSession (YUV, PCM) and encoding (H264, AAC)

The raw audio and video sample buffers are obtained through the AVCaptureVideoDataOutputSampleBufferDelegate (and the corresponding audio delegate) callbacks, and the raw audio and video data are then encoded separately. The outputs are raw streams: test.h264 holds Annex B H.264 and AACFile holds ADTS AAC, both of which can be played back directly with a tool such as ffplay.
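At a glance, the pipeline the code below implements (my summary of the post's structure, not part of the original):

AVCaptureSession
 |- AVCaptureVideoDataOutput -> captureOutput:didOutputSampleBuffer: -> H264Encoder (VideoToolbox) -> Annex B NAL units -> test.h264
 |- AVCaptureAudioDataOutput -> captureOutput:didOutputSampleBuffer: -> AACEncoder (AudioToolbox) -> ADTS AAC frames -> AACFile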
ViewController

//
//  ViewController.h
//  H264AACEncode
//
//  Created by ZhangWen on 15/10/14.
//  Copyright (C) 2015 Zhangwen. All rights reserved.
//

#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>
#import "AACEncoder.h"
#import "H264Encoder.h"

@interface ViewController : UIViewController <AVCaptureVideoDataOutputSampleBufferDelegate, AVCaptureAudioDataOutputSampleBufferDelegate, H264EncoderDelegate>

@end
//
//  ViewController.m
//  H264AACEncode
//
//  Created by ZhangWen on 15/10/14.
//  Copyright (C) 2015 Zhangwen. All rights reserved.
//

#import "ViewController.h"

#define CAPTURE_FRAMES_PER_SECOND 25
#define SAMPLE_RATE 44100   // matches the 44.1 kHz freqIdx used in the ADTS header below
#define VideoWidth 640      // matches AVCaptureSessionPreset640x480 with landscape orientation
#define VideoHeight 480
@interface ViewController ()
{
    UIButton *startBtn;
    bool startCalled;

    H264Encoder *h264Encoder;
    AACEncoder *aacEncoder;

    AVCaptureSession *captureSession;
    dispatch_queue_t _audioQueue;
    AVCaptureConnection *_audioConnection;
    AVCaptureConnection *_videoConnection;
    NSMutableData *_data;
    NSString *h264File;
    NSFileHandle *fileHandle;
}
@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.

    startCalled = true;
    _data = [[NSMutableData alloc] init];
    captureSession = [[AVCaptureSession alloc] init];
    [self initStartBtn];
}
#pragma mark
#pragma mark - Audio capture setup
- (void)setupAudioCapture {
    aacEncoder = [[AACEncoder alloc] init];
    /*
     * Create audio connection
     */
    AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
    NSError *error = nil;
    AVCaptureDeviceInput *audioInput = [[AVCaptureDeviceInput alloc] initWithDevice:audioDevice error:&error];
    if (error) {
        NSLog(@"Error getting audio input device: %@", error.description);
    }
    if ([captureSession canAddInput:audioInput]) {
        [captureSession addInput:audioInput];
    }

    _audioQueue = dispatch_queue_create("Audio Capture Queue", DISPATCH_QUEUE_SERIAL);
    AVCaptureAudioDataOutput *audioOutput = [[AVCaptureAudioDataOutput alloc] init];
    [audioOutput setSampleBufferDelegate:self queue:_audioQueue];
    if ([captureSession canAddOutput:audioOutput]) {
        [captureSession addOutput:audioOutput];
    }
    _audioConnection = [audioOutput connectionWithMediaType:AVMediaTypeAudio];
}
- (AVCaptureDevice *)cameraWithPosition:(AVCaptureDevicePosition)position
{
    NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for (AVCaptureDevice *device in devices)
    {
        if (device.position == position)
        {
            return device;
        }
    }
    return nil;
}
#pragma mark
#pragma mark - Video capture setup
- (void)setupVideoCaprure
{
    h264Encoder = [H264Encoder alloc];
    [h264Encoder initWithConfiguration];

    NSError *deviceError;
    AVCaptureDevice *cameraDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    cameraDevice = [self cameraWithPosition:AVCaptureDevicePositionBack];
    AVCaptureDeviceInput *inputDevice = [AVCaptureDeviceInput deviceInputWithDevice:cameraDevice error:&deviceError];

    // make output device
    AVCaptureVideoDataOutput *outputDevice = [[AVCaptureVideoDataOutput alloc] init];
    NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
    NSNumber *val = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_420YpCbCr8BiPlanarFullRange];
    NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:val forKey:key];

    NSError *error = nil;
    [cameraDevice lockForConfiguration:&error];
    if (error == nil) {
        NSLog(@"cameraDevice.activeFormat.videoSupportedFrameRateRanges IS %@", [cameraDevice.activeFormat.videoSupportedFrameRateRanges objectAtIndex:0]);
        if (cameraDevice.activeFormat.videoSupportedFrameRateRanges) {
            [cameraDevice setActiveVideoMinFrameDuration:CMTimeMake(1, CAPTURE_FRAMES_PER_SECOND)];
            [cameraDevice setActiveVideoMaxFrameDuration:CMTimeMake(1, CAPTURE_FRAMES_PER_SECOND)];
        }
    } else {
        // handle error2
    }
    [cameraDevice unlockForConfiguration];

    // Start the session running to start the flow of data
    outputDevice.videoSettings = videoSettings;
    [outputDevice setSampleBufferDelegate:self queue:dispatch_get_main_queue()];

    // initialize capture session
    if ([captureSession canAddInput:inputDevice]) {
        [captureSession addInput:inputDevice];
    }
    if ([captureSession canAddOutput:outputDevice]) {
        [captureSession addOutput:outputDevice];
    }

    // begin configuration for the AVCaptureSession
    [captureSession beginConfiguration];

    // picture resolution
    [captureSession setSessionPreset:[NSString stringWithString:AVCaptureSessionPreset640x480]];

    _videoConnection = [outputDevice connectionWithMediaType:AVMediaTypeVideo];

    // Set landscape (if required)
    if ([_videoConnection isVideoOrientationSupported])
    {
        AVCaptureVideoOrientation orientation = AVCaptureVideoOrientationLandscapeRight;
        // SET VIDEO ORIENTATION IF LANDSCAPE
        [_videoConnection setVideoOrientation:orientation];
    }

    // make preview layer and add so that camera's view is displayed on screen
    NSFileManager *fileManager = [NSFileManager defaultManager];
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths objectAtIndex:0];
    h264File = [documentsDirectory stringByAppendingPathComponent:@"test.h264"];
    [fileManager removeItemAtPath:h264File error:nil];
    [fileManager createFileAtPath:h264File contents:nil attributes:nil];

    // Open the file using POSIX as this is anyway a test application
    //fd = open([h264File UTF8String], O_RDWR);
    fileHandle = [NSFileHandle fileHandleForWritingAtPath:h264File];

    [h264Encoder initEncode:VideoWidth height:VideoHeight];
    h264Encoder.delegate = self;
}
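/*
 * Note (mine, not in the original post): -beginConfiguration is called above,
 * but the matching -commitConfiguration does not happen until -startCamera
 * below, so the preset and orientation changes are applied as one atomic
 * batch right before -startRunning.
 */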
#pragma mark
#pragma mark - sampleBuffer data
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CMTime pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    double dPTS = (double)(pts.value) / pts.timescale;
    NSLog(@"DPTS is %f", dPTS);

    if (connection == _videoConnection) {
        [h264Encoder encode:sampleBuffer];
    } else if (connection == _audioConnection) {
        [aacEncoder encodeSampleBuffer:sampleBuffer completionBlock:^(NSData *encodedData, NSError *error) {
            if (encodedData) {
                NSLog(@"Audio data (%lu): %@", (unsigned long)encodedData.length, encodedData.description);
                // Audio data (encodedData)
                [_data appendData:encodedData];
            } else {
                NSLog(@"Error encoding AAC: %@", error);
            }
        }];
    }
}
#pragma mark
#pragma mark - Video SPS and PPS
- (void)gotSpsPps:(NSData *)sps pps:(NSData *)pps
{
    const char bytes[] = "\x00\x00\x00\x01";
    size_t length = (sizeof bytes) - 1; // string literals have implicit trailing '\0'
    NSData *ByteHeader = [NSData dataWithBytes:bytes length:length];
    [fileHandle writeData:ByteHeader];
    [fileHandle writeData:sps];
    [fileHandle writeData:ByteHeader];
    [fileHandle writeData:pps];
}

#pragma mark
#pragma mark - Video data callback
- (void)gotEncodedData:(NSData *)data isKeyFrame:(BOOL)isKeyFrame
{
    NSLog(@"Video data (%lu): %@", (unsigned long)data.length, data.description);

    if (fileHandle != NULL)
    {
        const char bytes[] = "\x00\x00\x00\x01";
        size_t length = (sizeof bytes) - 1; // string literals have implicit trailing '\0'
        NSData *ByteHeader = [NSData dataWithBytes:bytes length:length];

        // Video data (data)
        [fileHandle writeData:ByteHeader];
        //[fileHandle writeData:UnitHeader];
        [fileHandle writeData:data];
    }
}
#pragma mark
#pragma mark - Recording
- (void)startBtnClicked
{
    if (startCalled)
    {
        [self startCamera];
        startCalled = false;
        [startBtn setTitle:@"Stop" forState:UIControlStateNormal];
    }
    else
    {
        [startBtn setTitle:@"Start" forState:UIControlStateNormal];
        startCalled = true;
        [self stopCarmera];
    }
}

- (void)startCamera
{
    [self setupAudioCapture];
    [self setupVideoCaprure];
    [captureSession commitConfiguration];
    [captureSession startRunning];
}

- (void)stopCarmera
{
    [h264Encoder End];
    [captureSession stopRunning];
    //close(fd);
    [fileHandle closeFile];
    fileHandle = NULL;

    // Get the app's Documents directory path
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths objectAtIndex:0];
    NSMutableString *path = [[NSMutableString alloc] initWithString:documentsDirectory];
    [path appendString:@"/AACFile"];
    [_data writeToFile:path atomically:YES];
}

- (void)initStartBtn
{
    startBtn = [UIButton buttonWithType:UIButtonTypeCustom];
    startBtn.frame = CGRectMake(0, 0, 100, 40);
    startBtn.center = self.view.center;
    [startBtn addTarget:self action:@selector(startBtnClicked) forControlEvents:UIControlEventTouchUpInside];
    [startBtn setTitle:@"Start" forState:UIControlStateNormal];
    [startBtn setTitleColor:[UIColor blackColor] forState:UIControlStateNormal];
    [self.view addSubview:startBtn];
}

- (void)didReceiveMemoryWarning {
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}

@end
AACEncoder

//
//  AACEncoder.h
//  H264AACEncode
//
//  Created by ZhangWen on 15/10/14.
//  Copyright (C) 2015 Zhangwen. All rights reserved.
//

#import <Foundation/Foundation.h>
#import <AVFoundation/AVFoundation.h>
#import <AudioToolbox/AudioToolbox.h>

@interface AACEncoder : NSObject

@property (nonatomic) dispatch_queue_t encoderQueue;
@property (nonatomic) dispatch_queue_t callbackQueue;

- (void)encodeSampleBuffer:(CMSampleBufferRef)sampleBuffer completionBlock:(void (^)(NSData *encodedData, NSError *error))completionBlock;

@end

//
//  AACEncoder.m
//  H264AACEncode
//
//  Created by ZhangWen on 15/10/14.
//  Copyright (C) 2015 Zhangwen. All rights reserved.
//

#import "AACEncoder.h"

@interface AACEncoder ()
@property (nonatomic) AudioConverterRef audioConverter;
@property (nonatomic) uint8_t *aacBuffer;
@property (nonatomic) NSUInteger aacBufferSize;
@property (nonatomic) char *pcmBuffer;
@property (nonatomic) size_t pcmBufferSize;
@end

@implementation AACEncoder

- (void)dealloc {
    AudioConverterDispose(_audioConverter);
    free(_aacBuffer);
}

- (id)init {
    if (self = [super init]) {
        _encoderQueue = dispatch_queue_create("AAC Encoder Queue", DISPATCH_QUEUE_SERIAL);
        _callbackQueue = dispatch_queue_create("AAC Encoder Callback Queue", DISPATCH_QUEUE_SERIAL);
        _audioConverter = NULL;
        _pcmBufferSize = 0;
        _pcmBuffer = NULL;
        _aacBufferSize = 1024;
        _aacBuffer = malloc(_aacBufferSize * sizeof(uint8_t));
        memset(_aacBuffer, 0, _aacBufferSize);
    }
    return self;
}
- (void)setupEncoderFromSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    AudioStreamBasicDescription inAudioStreamBasicDescription = *CMAudioFormatDescriptionGetStreamBasicDescription((CMAudioFormatDescriptionRef)CMSampleBufferGetFormatDescription(sampleBuffer));

    AudioStreamBasicDescription outAudioStreamBasicDescription = {0}; // Always initialize the fields of a new audio stream basic description structure to zero.
    outAudioStreamBasicDescription.mSampleRate = inAudioStreamBasicDescription.mSampleRate; // Frames per second of the data in the stream, when played at normal speed. For compressed formats, the number of frames per second of equivalent decompressed data. Must be nonzero.
    outAudioStreamBasicDescription.mFormatID = kAudioFormatMPEG4AAC; // kAudioFormatMPEG4AAC_HE does not work. Can't find `AudioClassDescription`. `mFormatFlags` is set to 0.
    outAudioStreamBasicDescription.mFormatFlags = kMPEG4Object_AAC_LC; // Format-specific flags to specify details of the format.
    outAudioStreamBasicDescription.mBytesPerPacket = 0; // 0 indicates variable packet size.
    outAudioStreamBasicDescription.mFramesPerPacket = 1024; // For AAC, a fixed 1024 frames per packet.
    outAudioStreamBasicDescription.mBytesPerFrame = 0; // Set to 0 for compressed formats.
    outAudioStreamBasicDescription.mChannelsPerFrame = 1; // The number of channels in each frame of audio data. Must be nonzero.
    outAudioStreamBasicDescription.mBitsPerChannel = 0; // Set to 0 for compressed formats.
    outAudioStreamBasicDescription.mReserved = 0; // Pads the structure out to force an even 8-byte alignment. Must be 0.

    AudioClassDescription *description = [self getAudioClassDescriptionWithType:kAudioFormatMPEG4AAC
                                                               fromManufacturer:kAppleSoftwareAudioCodecManufacturer];
    OSStatus status = AudioConverterNewSpecific(&inAudioStreamBasicDescription, &outAudioStreamBasicDescription, 1, description, &_audioConverter);
    if (status != 0) {
        NSLog(@"setup converter: %d", (int)status);
    }
}
- (AudioClassDescription *)getAudioClassDescriptionWithType:(UInt32)type
                                           fromManufacturer:(UInt32)manufacturer
{
    static AudioClassDescription desc;

    UInt32 encoderSpecifier = type;
    OSStatus st;

    UInt32 size;
    st = AudioFormatGetPropertyInfo(kAudioFormatProperty_Encoders,
                                    sizeof(encoderSpecifier),
                                    &encoderSpecifier,
                                    &size);
    if (st) {
        NSLog(@"error getting audio format propery info: %d", (int)(st));
        return nil;
    }

    unsigned int count = size / sizeof(AudioClassDescription);
    AudioClassDescription descriptions[count];
    st = AudioFormatGetProperty(kAudioFormatProperty_Encoders,
                                sizeof(encoderSpecifier),
                                &encoderSpecifier,
                                &size,
                                descriptions);
    if (st) {
        NSLog(@"error getting audio format propery: %d", (int)(st));
        return nil;
    }

    for (unsigned int i = 0; i < count; i++) {
        if ((type == descriptions[i].mSubType) &&
            (manufacturer == descriptions[i].mManufacturer)) {
            memcpy(&desc, &(descriptions[i]), sizeof(desc));
            return &desc;
        }
    }

    return nil;
}
static OSStatus inInputDataProc(AudioConverterRef inAudioConverter, UInt32 *ioNumberDataPackets, AudioBufferList *ioData, AudioStreamPacketDescription **outDataPacketDescription, void *inUserData)
{
    AACEncoder *encoder = (__bridge AACEncoder *)(inUserData);
    UInt32 requestedPackets = *ioNumberDataPackets;
    //NSLog(@"Number of packets requested: %d", (unsigned int)requestedPackets);
    size_t copiedSamples = [encoder copyPCMSamplesIntoBuffer:ioData];
    if (copiedSamples < requestedPackets) {
        //NSLog(@"PCM buffer isn't full enough!");
        *ioNumberDataPackets = 0;
        return -1;
    }
    *ioNumberDataPackets = 1;
    //NSLog(@"Copied %zu samples into ioData", copiedSamples);
    return noErr;
}
- (size_t)copyPCMSamplesIntoBuffer:(AudioBufferList *)ioData {
    size_t originalBufferSize = _pcmBufferSize;
    if (!originalBufferSize) {
        return 0;
    }
    ioData->mBuffers[0].mData = _pcmBuffer;
    ioData->mBuffers[0].mDataByteSize = (UInt32)_pcmBufferSize;
    _pcmBuffer = NULL;
    _pcmBufferSize = 0;
    return originalBufferSize;
}
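/*
 * How the pull model above works (explanatory note, not in the original post):
 * AudioConverterFillComplexBuffer drives the conversion; whenever the codec
 * needs more input it calls inInputDataProc, which hands over whatever PCM
 * the last CMSampleBuffer supplied via copyPCMSamplesIntoBuffer:. Once that
 * buffer has been consumed, the callback reports 0 packets and returns a
 * nonzero status (-1), which tells the converter to stop pulling for this
 * call rather than signalling a hard error.
 */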
- (void)encodeSampleBuffer:(CMSampleBufferRef)sampleBuffer completionBlock:(void (^)(NSData *encodedData, NSError *error))completionBlock {
    CFRetain(sampleBuffer);
    dispatch_async(_encoderQueue, ^{
        if (!_audioConverter) {
            [self setupEncoderFromSampleBuffer:sampleBuffer];
        }
        CMBlockBufferRef blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
        CFRetain(blockBuffer);
        OSStatus status = CMBlockBufferGetDataPointer(blockBuffer, 0, NULL, &_pcmBufferSize, &_pcmBuffer);
        NSError *error = nil;
        if (status != kCMBlockBufferNoErr) {
            error = [NSError errorWithDomain:NSOSStatusErrorDomain code:status userInfo:nil];
        }
        //NSLog(@"PCM Buffer Size: %zu", _pcmBufferSize);

        memset(_aacBuffer, 0, _aacBufferSize);
        AudioBufferList outAudioBufferList = {0};
        outAudioBufferList.mNumberBuffers = 1;
        outAudioBufferList.mBuffers[0].mNumberChannels = 1;
        outAudioBufferList.mBuffers[0].mDataByteSize = (UInt32)_aacBufferSize;
        outAudioBufferList.mBuffers[0].mData = _aacBuffer;
        AudioStreamPacketDescription *outPacketDescription = NULL;
        UInt32 ioOutputDataPacketSize = 1;
        status = AudioConverterFillComplexBuffer(_audioConverter, inInputDataProc, (__bridge void *)(self), &ioOutputDataPacketSize, &outAudioBufferList, outPacketDescription);
        //NSLog(@"ioOutputDataPacketSize: %d", (unsigned int)ioOutputDataPacketSize);
        NSData *data = nil;
        if (status == 0) {
            NSData *rawAAC = [NSData dataWithBytes:outAudioBufferList.mBuffers[0].mData length:outAudioBufferList.mBuffers[0].mDataByteSize];
            NSData *adtsHeader = [self adtsDataForPacketLength:rawAAC.length];
            NSMutableData *fullData = [NSMutableData dataWithData:adtsHeader];
            [fullData appendData:rawAAC];
            data = fullData;
        } else {
            error = [NSError errorWithDomain:NSOSStatusErrorDomain code:status userInfo:nil];
        }
        if (completionBlock) {
            dispatch_async(_callbackQueue, ^{
                completionBlock(data, error);
            });
        }
        CFRelease(sampleBuffer);
        CFRelease(blockBuffer);
    });
}
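/*
 * Note (mine, not in the original post): ioOutputDataPacketSize is 1, so each
 * call produces at most one 1024-frame AAC packet. That lines up with
 * AVCaptureAudioDataOutput, which typically delivers PCM in 1024-frame
 * sample buffers, so one capture buffer maps to one ADTS frame.
 */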
/**
 *  Add ADTS header at the beginning of each and every AAC packet.
 *  This is needed as the encoder generates a packet of raw AAC data.
 *
 *  Note the packetLen must count in the ADTS header itself.
 *  See: http://wiki.multimedia.cx/index.php?title=ADTS
 *  Also: http://wiki.multimedia.cx/index.php?title=MPEG-4_Audio#Channel_Configurations
 **/
- (NSData *)adtsDataForPacketLength:(NSUInteger)packetLength {
    int adtsLength = 7;
    char *packet = malloc(sizeof(char) * adtsLength);
    // Variables Recycled by addADTStoPacket
    int profile = 2;  // AAC LC
    //39=MediaCodecInfo.CodecProfileLevel.AACObjectELD;
    int freqIdx = 4;  // 44.1KHz
    int chanCfg = 1;  // MPEG-4 Audio Channel Configuration. 1 Channel front-center
    NSUInteger fullLength = adtsLength + packetLength;
    // fill in ADTS data
    packet[0] = (char)0xFF; // 11111111     = syncword
    packet[1] = (char)0xF9; // 1111 1 00 1  = syncword MPEG-2 Layer CRC
    packet[2] = (char)(((profile-1)<<6) + (freqIdx<<2) + (chanCfg>>2));
    packet[3] = (char)(((chanCfg&3)<<6) + (fullLength>>11));
    packet[4] = (char)((fullLength&0x7FF) >> 3);
    packet[5] = (char)(((fullLength&7)<<5) + 0x1F);
    packet[6] = (char)0xFC;
    NSData *data = [NSData dataWithBytesNoCopy:packet length:adtsLength freeWhenDone:YES];
    return data;
}

@end
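As a sanity check on the bit packing above, here is a small helper of my own (not part of the original post) that decodes the seven ADTS header bytes back into the fields written by adtsDataForPacketLength:. The field layout follows the ADTS page linked in the comment.

/*
 * Minimal ADTS header parser (illustrative sketch, assumes p points at a
 * 7-byte header produced by the method above).
 */
static void parseADTSHeader(const uint8_t *p) {
    // syncword: 12 bits, must be 0xFFF
    int sync = (p[0] << 4) | (p[1] >> 4);
    // profile is stored as (audio object type - 1) in 2 bits; 2 == AAC LC
    int profile = ((p[2] >> 6) & 0x3) + 1;
    // sampling frequency index: 4 bits; 4 == 44.1 kHz
    int freqIdx = (p[2] >> 2) & 0xF;
    // channel configuration: 3 bits spanning bytes 2 and 3; 1 == mono
    int chanCfg = ((p[2] & 0x1) << 2) | (p[3] >> 6);
    // frame length: 13 bits spanning bytes 3..5, counts the header itself
    int frameLength = ((p[3] & 0x3) << 11) | (p[4] << 3) | (p[5] >> 5);
    NSLog(@"sync=0x%X profile=%d freqIdx=%d chanCfg=%d len=%d",
          sync, profile, freqIdx, chanCfg, frameLength);
}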
H264Encoder

//
//  H264Encoder.h
//  H264AACEncode
//
//  Created by ZhangWen on 15/10/14.
//  Copyright (C) 2015 Zhangwen. All rights reserved.
//

#import <Foundation/Foundation.h>
#import <AVFoundation/AVFoundation.h>
#import <VideoToolbox/VideoToolbox.h>

@protocol H264EncoderDelegate <NSObject>

- (void)gotSpsPps:(NSData *)sps pps:(NSData *)pps;
- (void)gotEncodedData:(NSData *)data isKeyFrame:(BOOL)isKeyFrame;

@end

@interface H264Encoder : NSObject

- (void)initWithConfiguration;
- (void)start:(int)width height:(int)height;
- (void)initEncode:(int)width height:(int)height;
- (void)encode:(CMSampleBufferRef)sampleBuffer;
- (void)End;

@property (weak, nonatomic) NSString *error;
@property (weak, nonatomic) id<H264EncoderDelegate> delegate;

@end
//
//  H264Encoder.m
//  H264AACEncode
//
//  Created by ZhangWen on 15/10/14.
//  Copyright (C) 2015 Zhangwen. All rights reserved.
//

#import "H264Encoder.h"
#include <fcntl.h>
#include <unistd.h>

@implementation H264Encoder
{
    NSString *yuvFile;
    VTCompressionSessionRef EncodingSession;
    dispatch_queue_t aQueue;
    CMFormatDescriptionRef format;
    CMSampleTimingInfo *timingInfo;
    BOOL initialized;
    int frameCount;
    NSData *sps;
    NSData *pps;
}
@synthesize error;

- (void)initWithConfiguration
{
    EncodingSession = nil;
    initialized = true;
    aQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    frameCount = 0;
    sps = NULL;
    pps = NULL;
}
void didCompressH264(void *outputCallbackRefCon, void *sourceFrameRefCon, OSStatus status, VTEncodeInfoFlags infoFlags,
                     CMSampleBufferRef sampleBuffer)
{
    NSLog(@"didCompressH264 called with status %d infoFlags %d", (int)status, (int)infoFlags);
    if (status != 0) return;

    if (!CMSampleBufferDataIsReady(sampleBuffer))
    {
        NSLog(@"didCompressH264 data is not ready ");
        return;
    }
    H264Encoder *encoder = (__bridge H264Encoder *)outputCallbackRefCon;

    // Check if we have got a key frame first
    bool keyframe = !CFDictionaryContainsKey((CFDictionaryRef)CFArrayGetValueAtIndex(CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, true), 0), kCMSampleAttachmentKey_NotSync);

    if (keyframe)
    {
        CMFormatDescriptionRef format = CMSampleBufferGetFormatDescription(sampleBuffer);
        // CFDictionaryRef extensionDict = CMFormatDescriptionGetExtensions(format);
        // Get the extensions
        // From the extensions get the dictionary with key "SampleDescriptionExtensionAtoms"
        // From the dict, get the value for the key "avcC"

        size_t sparameterSetSize, sparameterSetCount;
        const uint8_t *sparameterSet;
        OSStatus statusCode = CMVideoFormatDescriptionGetH264ParameterSetAtIndex(format, 0, &sparameterSet, &sparameterSetSize, &sparameterSetCount, 0);
        if (statusCode == noErr)
        {
            // Found sps and now check for pps
            size_t pparameterSetSize, pparameterSetCount;
            const uint8_t *pparameterSet;
            OSStatus statusCode = CMVideoFormatDescriptionGetH264ParameterSetAtIndex(format, 1, &pparameterSet, &pparameterSetSize, &pparameterSetCount, 0);
            if (statusCode == noErr)
            {
                // Found pps
                encoder->sps = [NSData dataWithBytes:sparameterSet length:sparameterSetSize];
                encoder->pps = [NSData dataWithBytes:pparameterSet length:pparameterSetSize];
                if (encoder->_delegate)
                {
                    [encoder->_delegate gotSpsPps:encoder->sps pps:encoder->pps];
                }
            }
        }
    }

    CMBlockBufferRef dataBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
    size_t length, totalLength;
    char *dataPointer;
    OSStatus statusCodeRet = CMBlockBufferGetDataPointer(dataBuffer, 0, &length, &totalLength, &dataPointer);
    if (statusCodeRet == noErr) {
        size_t bufferOffset = 0;
        static const int AVCCHeaderLength = 4;
        while (bufferOffset < totalLength - AVCCHeaderLength) {
            // Read the NAL unit length
            uint32_t NALUnitLength = 0;
            memcpy(&NALUnitLength, dataPointer + bufferOffset, AVCCHeaderLength);

            // Convert the length value from Big-endian to Little-endian
            NALUnitLength = CFSwapInt32BigToHost(NALUnitLength);

            NSData *data = [[NSData alloc] initWithBytes:(dataPointer + bufferOffset + AVCCHeaderLength) length:NALUnitLength];
            [encoder->_delegate gotEncodedData:data isKeyFrame:keyframe];

            // Move to the next NAL unit in the block buffer
            bufferOffset += AVCCHeaderLength + NALUnitLength;
        }
    }
}
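/*
 * Why the loop above rewrites the bitstream (explanatory note, not in the
 * original post): VideoToolbox emits AVCC-format samples, where each NAL
 * unit is prefixed with a 4-byte big-endian length and the SPS/PPS live in
 * the format description rather than in-band. A raw .h264 file needs Annex B
 * framing instead: each NAL unit (including SPS and PPS) preceded by the
 * start code 00 00 00 01. That is why gotSpsPps: and gotEncodedData: in
 * ViewController prepend "\x00\x00\x00\x01" before writing to disk.
 */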
- (void)start:(int)width height:(int)height
{
    int frameSize = (width * height * 1.5);

    if (!initialized)
    {
        NSLog(@"H264: Not initialized");
        error = @"H264: Not initialized";
        return;
    }

    dispatch_sync(aQueue, ^{
        // For testing out the logic, lets read from a file and then send it to encoder to create h264 stream

        // Create the compression session
        OSStatus status = VTCompressionSessionCreate(NULL, width, height, kCMVideoCodecType_H264, NULL, NULL, NULL, didCompressH264, (__bridge void *)(self),
                                                     &EncodingSession);
        NSLog(@"H264: VTCompressionSessionCreate %d", (int)status);
        if (status != 0)
        {
            NSLog(@"H264: Unable to create a H264 session");
            error = @"H264: Unable to create a H264 session";
            return;
        }

        // Set the properties
        VTSessionSetProperty(EncodingSession, kVTCompressionPropertyKey_RealTime, kCFBooleanTrue);
        VTSessionSetProperty(EncodingSession, kVTCompressionPropertyKey_AllowFrameReordering, kCFBooleanFalse);
        VTSessionSetProperty(EncodingSession, kVTCompressionPropertyKey_MaxKeyFrameInterval, (__bridge CFTypeRef)@(240)); // boxed NSNumber; a bare int literal will not compile here
        VTSessionSetProperty(EncodingSession, kVTCompressionPropertyKey_ProfileLevel, kVTProfileLevel_H264_High_AutoLevel);

        // Tell the encoder to start encoding
        VTCompressionSessionPrepareToEncodeFrames(EncodingSession);

        // Start reading from the file and copy it to the buffer
        // Open the file using POSIX as this is anyway a test application
        int fd = open([yuvFile UTF8String], O_RDONLY);
        if (fd == -1)
        {
            NSLog(@"H264: Unable to open the file");
            error = @"H264: Unable to open the file";
            return;
        }

        NSMutableData *theData = [[NSMutableData alloc] initWithLength:frameSize];
        NSUInteger actualBytes = frameSize;
        while (actualBytes > 0)
        {
            void *buffer = [theData mutableBytes];
            NSUInteger bufferSize = [theData length];
            actualBytes = read(fd, buffer, bufferSize);
            if (actualBytes < frameSize)
                [theData setLength:actualBytes];
            frameCount++;

            // Create a CM Block buffer out of this data
            CMBlockBufferRef BlockBuffer = NULL;
            OSStatus status = CMBlockBufferCreateWithMemoryBlock(NULL, buffer, actualBytes, kCFAllocatorNull, NULL, 0, actualBytes, kCMBlockBufferAlwaysCopyDataFlag, &BlockBuffer);

            // Check for error
            if (status != noErr)
            {
                NSLog(@"H264: CMBlockBufferCreateWithMemoryBlock failed with %d", (int)status);
                error = @"H264: CMBlockBufferCreateWithMemoryBlock failed ";
                return;
            }

            // Create a CM Sample Buffer
            CMSampleBufferRef sampleBuffer = NULL;
            CMFormatDescriptionRef formatDescription;
            CMFormatDescriptionCreate(kCFAllocatorDefault, // Allocator
                                      kCMMediaType_Video,
                                      'I420',
                                      NULL,
                                      &formatDescription);
            CMSampleTimingInfo sampleTimingInfo = {CMTimeMake(1, 300)};

            OSStatus statusCode = CMSampleBufferCreate(kCFAllocatorDefault, BlockBuffer, YES, NULL, NULL, formatDescription, 1, 1, &sampleTimingInfo, 0, NULL, &sampleBuffer);

            // Check for error
            if (statusCode != noErr)
            {
                NSLog(@"H264: CMSampleBufferCreate failed with %d", (int)statusCode);
                error = @"H264: CMSampleBufferCreate failed ";
                return;
            }

            CFRelease(BlockBuffer);
            BlockBuffer = NULL;

            // Get the CV Image buffer
            CVImageBufferRef imageBuffer = (CVImageBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);

            // Create properties
            CMTime presentationTimeStamp = CMTimeMake(frameCount, 300);
            //CMTime duration = CMTimeMake(1, DURATION);
            VTEncodeInfoFlags flags;

            // Pass it to the encoder
            statusCode = VTCompressionSessionEncodeFrame(EncodingSession,
                                                         imageBuffer,
                                                         presentationTimeStamp,
                                                         kCMTimeInvalid,
                                                         NULL, NULL, &flags);
            // Check for error
            if (statusCode != noErr)
            {
                NSLog(@"H264: VTCompressionSessionEncodeFrame failed with %d", (int)statusCode);
                error = @"H264: VTCompressionSessionEncodeFrame failed ";

                // End the session
                VTCompressionSessionInvalidate(EncodingSession);
                CFRelease(EncodingSession);
                EncodingSession = NULL;
                error = NULL;
                return;
            }
            NSLog(@"H264: VTCompressionSessionEncodeFrame Success");
        }

        // Mark the completion
        VTCompressionSessionCompleteFrames(EncodingSession, kCMTimeInvalid);

        // End the session
        VTCompressionSessionInvalidate(EncodingSession);
        CFRelease(EncodingSession);
        EncodingSession = NULL;
        error = NULL;
        close(fd);
    });
}
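/*
 * Note (mine, not in the original post): -start:height: is the leftover
 * file-based test path; it reads raw YUV frames from yuvFile. The live
 * capture path used by ViewController is -initEncode:height: plus -encode:
 * below. Also beware that a sample buffer created from a plain block buffer
 * carries no CVImageBuffer, so CMSampleBufferGetImageBuffer returns NULL in
 * this test path.
 */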
- (void)initEncode:(int)width height:(int)height
{
    dispatch_sync(aQueue, ^{
        // Create the compression session
        OSStatus status = VTCompressionSessionCreate(NULL, width, height, kCMVideoCodecType_H264, NULL, NULL, NULL, didCompressH264, (__bridge void *)(self),
                                                     &EncodingSession);
        NSLog(@"H264: VTCompressionSessionCreate %d", (int)status);
        if (status != 0)
        {
            NSLog(@"H264: Unable to create a H264 session");
            error = @"H264: Unable to create a H264 session";
            return;
        }

        // Set the properties
        VTSessionSetProperty(EncodingSession, kVTCompressionPropertyKey_RealTime, kCFBooleanTrue);
        VTSessionSetProperty(EncodingSession, kVTCompressionPropertyKey_ProfileLevel, kVTProfileLevel_H264_Main_AutoLevel);

        // Tell the encoder to start encoding
        VTCompressionSessionPrepareToEncodeFrames(EncodingSession);
    });
}
- (void)encode:(CMSampleBufferRef)sampleBuffer
{
    dispatch_sync(aQueue, ^{
        frameCount++;
        // Get the CV Image buffer
        CVImageBufferRef imageBuffer = (CVImageBufferRef)CMSampleBufferGetImageBuffer(sampleBuffer);

        // Create properties
        CMTime presentationTimeStamp = CMTimeMake(frameCount, 1000);
        //CMTime duration = CMTimeMake(1, DURATION);
        VTEncodeInfoFlags flags;

        // Pass it to the encoder
        OSStatus statusCode = VTCompressionSessionEncodeFrame(EncodingSession,
                                                              imageBuffer,
                                                              presentationTimeStamp,
                                                              kCMTimeInvalid,
                                                              NULL, NULL, &flags);
        // Check for error
        if (statusCode != noErr)
        {
            NSLog(@"H264: VTCompressionSessionEncodeFrame failed with %d", (int)statusCode);
            error = @"H264: VTCompressionSessionEncodeFrame failed ";

            // End the session
            VTCompressionSessionInvalidate(EncodingSession);
            CFRelease(EncodingSession);
            EncodingSession = NULL;
            error = NULL;
            return;
        }
        NSLog(@"H264: VTCompressionSessionEncodeFrame Success");
    });
}
- (void)End
{
    // Mark the completion
    VTCompressionSessionCompleteFrames(EncodingSession, kCMTimeInvalid);

    // End the session
    VTCompressionSessionInvalidate(EncodingSession);
    CFRelease(EncodingSession);
    EncodingSession = NULL;
    error = NULL;
}

@end
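To tie the pieces together, here is a minimal usage sketch of the encoder's public API as declared in H264Encoder.h above (my illustration, not from the original post):

    H264Encoder *encoder = [H264Encoder alloc];
    [encoder initWithConfiguration];      // reset state, pick the dispatch queue
    [encoder initEncode:640 height:480];  // create the VTCompressionSession
    encoder.delegate = self;              // self implements H264EncoderDelegate

    // For every CMSampleBufferRef delivered by the capture callback:
    [encoder encode:sampleBuffer];        // H264 output arrives via the delegate

    // When capture stops:
    [encoder End];                        // flush and invalidate the session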