
WebRTC #16

Open
canvascat opened this issue Apr 14, 2021 · 2 comments

WebRTC

WebRTC (Web Real-Time Communication) was originally proposed to solve in-browser video calling: two browsers exchange video and audio directly, without going through a server. It has since grown beyond audio and video to carry text and other data as well.

Capturing audio and video

MediaDevices.getUserMedia()

MediaDevices.getUserMedia() prompts the user for permission to use a media input, which produces a MediaStream containing tracks of the requested media types. The stream may contain a video track (from a hardware or virtual video source such as a camera, video capture device, or screen-sharing service), an audio track (likewise from a hardware or virtual audio source such as a microphone or A/D converter), and possibly other track types.

const constraints: MediaStreamConstraints = { audio: true, video: true };
const $video = document.querySelector('video') as HTMLVideoElement;
navigator.mediaDevices.getUserMedia(constraints).then(
  (stream: MediaStream) => {
    $video.srcObject = stream;
    // Legacy alternative; URL.createObjectURL(MediaStream) has been
    // removed from modern browsers:
    // $video.src = URL.createObjectURL(stream);
  },
  (err) => {
    // PermissionDeniedError / NotFoundError
  }
);

Both audio and video accept a MediaTrackConstraints object, which can select a specific device via its deviceId.
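A small helper can build such constraints; this is a sketch, and the device IDs are hypothetical placeholders you would obtain from enumerateDevices():

```typescript
// Sketch: pin capture to specific devices. The IDs are hypothetical;
// real ones come from navigator.mediaDevices.enumerateDevices().
function buildConstraints(micId?: string, camId?: string): MediaStreamConstraints {
  return {
    audio: micId ? { deviceId: { exact: micId } } : true,
    video: camId ? { deviceId: { exact: camId } } : true,
  };
}
```

Using exact makes getUserMedia reject rather than silently fall back to a different device.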

MediaDevices.enumerateDevices()

enumerateDevices returns basic information about each available input and output device.

navigator.mediaDevices.enumerateDevices().then((devices: MediaDeviceInfo[]) => {
  devices.forEach(({ deviceId, label, kind }) => {
    switch (kind) {
      case 'audioinput': // microphone
      case 'videoinput': // camera
      case 'audiooutput': // speaker
      default:
        break;
    }
  });
});

MediaDevices.getDisplayMedia()

navigator.mediaDevices.getDisplayMedia is used in the same way as navigator.mediaDevices.getUserMedia; it returns a MediaStream of a window's contents.
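A minimal sketch of screen capture, assuming a video element with id "preview" exists on the page:

```typescript
// Sketch: show the user's shared screen in a preview element.
async function shareScreen(): Promise<MediaStream> {
  const stream = await navigator.mediaDevices.getDisplayMedia({
    video: true,
    audio: false, // system-audio capture is optional and not supported everywhere
  });
  const $video = document.querySelector('#preview') as HTMLVideoElement;
  $video.srcObject = stream;
  // Clear the preview when the user stops sharing from the browser UI.
  stream.getVideoTracks()[0].onended = () => ($video.srcObject = null);
  return stream;
}
```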

With a media stream obtained from the methods above, a canvas can be used to take a photo or screenshot:

async function snapshot(
  stream: MediaStream,
  ctx: CanvasRenderingContext2D,
  video: HTMLVideoElement
) {
  video.autoplay = true;
  video.srcObject = stream;
  // `load` never fires on media elements; wait for the first decodable frame.
  await new Promise((resolve, reject) => {
    video.onloadeddata = resolve;
    video.onerror = reject;
  });
  ctx.drawImage(video, 0, 0);
  const $link = document.createElement('a');
  $link.href = ctx.canvas.toDataURL('image/webp');
  $link.download = 'snapshot.webp';
  $link.click();
}

RTC communication

The shim library Adapter.js smooths over differences between browsers.

RTCPeerConnection

The RTCPeerConnection interface represents a WebRTC connection between the local computer and a remote peer. It provides methods for creating, maintaining, monitoring, and closing the connection.

// Example using a WebSocket as the signaling channel; assume `ws` is already open
let ws: WebSocket, pc: RTCPeerConnection;
async function start(stream: MediaStream, configuration?: RTCConfiguration) {
  pc = new RTCPeerConnection(configuration);
  pc.onicecandidate = (evt) => {
    const { candidate } = evt;
    // The final event fires with a null candidate; do not forward it.
    if (candidate) ws.send(JSON.stringify({ candidate }));
  };
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));
  // offering (caller) side
  const sdp = await pc.createOffer();
  // answering (callee) side
  // const sdp = await pc.createAnswer()
  await pc.setLocalDescription(sdp);
  ws.send(JSON.stringify({ sdp }));
}
ws.onmessage = async (e) => {
  const data = JSON.parse(e.data);
  // Apply the remote description before adding the peer's ICE candidates.
  if (data.sdp) await pc.setRemoteDescription(new RTCSessionDescription(data.sdp));
  if (data.candidate) await pc.addIceCandidate(new RTCIceCandidate(data.candidate));
};

The iceServers list can be generated with https://github.com/DamonOehlman/freeice.
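A minimal RTCConfiguration sketch; the Google STUN endpoint below is a well-known public server, and freeice can produce a similar list dynamically:

```typescript
// Sketch: the configuration passed to `new RTCPeerConnection(configuration)`.
const configuration: RTCConfiguration = {
  iceServers: [{ urls: 'stun:stun.l.google.com:19302' }],
};
```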

RTCDataChannel

The RTCDataChannel interface represents a bidirectional data channel established between the two peers, similar to a WebSocket.

https://developer.mozilla.org/en-US/docs/Web/API/RTCDataChannel
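A sketch of a text channel on top of an existing RTCPeerConnection; the label "chat" and the messages are illustrative, not part of the API:

```typescript
// Sketch: the side that creates the channel.
function openChat(pc: RTCPeerConnection): RTCDataChannel {
  const channel = pc.createDataChannel('chat');
  channel.onopen = () => channel.send('hello');
  channel.onmessage = (e) => console.log('peer said:', e.data);
  return channel;
}

// Sketch: the remote side receives it via the `datachannel` event.
function listenForChat(pc: RTCPeerConnection) {
  pc.ondatachannel = ({ channel }) => {
    channel.onmessage = (e) => console.log('peer said:', e.data);
  };
}
```

Channel negotiation rides on the same offer/answer exchange used for media, so no extra signaling is needed once the connection is set up.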


@canvascat (Owner, Author) commented:

MediaStream processing

https://developer.mozilla.org/en-US/docs/Web/API/HTMLMediaElement/setSinkId
HTMLMediaElement.setSinkId(sinkId) sets the ID of the audio device used for output.
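A sketch with feature detection, since setSinkId is not available in every browser (the cast also keeps it compiling against older TypeScript DOM typings):

```typescript
// Sketch: route a media element's audio output to a chosen device.
async function routeOutput($media: HTMLMediaElement, sinkId: string) {
  const el = $media as HTMLMediaElement & {
    setSinkId?: (id: string) => Promise<void>;
  };
  if (typeof el.setSinkId !== 'function') {
    throw new Error('setSinkId is not supported in this browser');
  }
  await el.setSinkId(sinkId); // sinkId comes from an "audiooutput" device entry
}
```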

https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getUserMedia
After setting the ID of the audio/video input device, re-acquire the MediaStream and assign it to the HTMLMediaElement.
navigator.mediaDevices.getUserMedia(MediaStreamConstraints)
Passing a MediaStreamConstraints returns a stream matching the given conditions: set deviceId/width/height and so on, or set video/audio to false to disable that input.
Setting width/height returns a stream at the given resolution; a high-resolution source can adapt down to a lower one, but requesting more than the hardware supports throws an OverconstrainedError.
https://developer.mozilla.org/en-US/docs/Web/API/MediaStreamTrack/applyConstraints
MediaStreamTrack.applyConstraints() constrains a stream's resolution, width, height, aspect ratio, and so on.
https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getDisplayMedia
navigator.mediaDevices.getDisplayMedia() captures a window's contents as a media stream.
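The OverconstrainedError case can be handled with a fallback request; a sketch:

```typescript
// Sketch: ask for an exact resolution, fall back to "ideal" (the browser
// then picks the closest supported mode) when the camera cannot satisfy it.
async function openCamera(width: number, height: number): Promise<MediaStream> {
  try {
    return await navigator.mediaDevices.getUserMedia({
      video: { width: { exact: width }, height: { exact: height } },
    });
  } catch (err) {
    if ((err as DOMException).name === 'OverconstrainedError') {
      return navigator.mediaDevices.getUserMedia({
        video: { width: { ideal: width }, height: { ideal: height } },
      });
    }
    throw err;
  }
}
```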

Grabbing a video snapshot via HTMLCanvasElement:
HTMLCanvasElement.getContext('2d').drawImage(HTMLVideoElement, 0, 0, HTMLCanvasElement.width, HTMLCanvasElement.height)
https://developer.mozilla.org/en-US/docs/Web/CSS/filter
Applying a CSS filter to the HTMLVideoElement lets you grab a snapshot with the filter effect applied.

https://developer.mozilla.org/en-US/docs/Web/API/MediaRecorder
MediaRecorder records a MediaStream; the recorded data arrives as Blob chunks.

const recordedBlobs: Blob[] = [];
// Pick a container/codec combination the browser actually supports,
// and reuse the same type for the final Blob.
const mimeType = MediaRecorder.isTypeSupported('video/webm;codecs=vp9,opus')
  ? 'video/webm;codecs=vp9,opus'
  : 'video/webm';
const mediaRecorder = new MediaRecorder(window.stream, { mimeType });
mediaRecorder.onstop = (event) => {
  const blob = new Blob(recordedBlobs, { type: mimeType });
  const url = window.URL.createObjectURL(blob);
  // Do something
};
mediaRecorder.ondataavailable = (e) => {
  e.data && e.data.size > 0 && recordedBlobs.push(e.data);
};
mediaRecorder.start();

https://developer.mozilla.org/en-US/docs/Web/API/MediaStreamTrack
Inspect a track's adjustable properties and their allowed values or ranges with MediaStreamTrack.getCapabilities(), then adjust them with MediaStreamTrack.applyConstraints().
Example: https://webrtc.github.io/samples/src/content/getusermedia/pan-tilt-zoom/
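A sketch of the read-range-then-apply pattern, using frameRate; pan/tilt/zoom from the linked sample work the same way but are typed less uniformly across browsers:

```typescript
// Sketch: clamp a requested frame rate to what the track says it supports.
async function setFrameRate(track: MediaStreamTrack, fps: number) {
  const caps = track.getCapabilities();
  const min = caps.frameRate?.min ?? fps;
  const max = caps.frameRate?.max ?? fps;
  await track.applyConstraints({
    frameRate: Math.min(Math.max(fps, min), max),
  });
}
```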

HTMLMediaElement.captureStream() returns a media element's stream; it works on HTMLCanvasElement as well.
Drawing the source video to a canvas, processing it, then writing the result with putImageData to an output canvas allows real-time processing of streamed content, usually driven by requestAnimationFrame and/or a Worker.
Examples: https://webrtc.github.io/samples/src/content/capture/canvas-filter/ & https://webrtc.github.io/samples/src/content/capture/worker-process
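The draw, process, putImageData loop can be sketched like this, with colour inversion standing in for any per-pixel filter:

```typescript
// Sketch: process a video stream frame-by-frame through two canvases.
function processLoop(
  $video: HTMLVideoElement,
  input: CanvasRenderingContext2D,
  output: CanvasRenderingContext2D
) {
  const { width, height } = input.canvas;
  const step = () => {
    input.drawImage($video, 0, 0, width, height);
    const frame = input.getImageData(0, 0, width, height);
    for (let i = 0; i < frame.data.length; i += 4) {
      frame.data[i] = 255 - frame.data[i];         // R
      frame.data[i + 1] = 255 - frame.data[i + 1]; // G
      frame.data[i + 2] = 255 - frame.data[i + 2]; // B (alpha untouched)
    }
    output.putImageData(frame, 0, 0);
    requestAnimationFrame(step);
  };
  requestAnimationFrame(step);
}
// output.canvas.captureStream() then yields the processed MediaStream.
```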
