Auto generate codes for native sdk 4.3.1 (#1729)
Auto generate codes for native sdk version 4.3.1

*This pull request is opened by bot*

Co-authored-by: littleGnAl <littleGnAl@users.noreply.github.com>
github-actions[bot] and littleGnAl authored Apr 29, 2024
1 parent 5cf0a46 commit b1c7e1b
Showing 21 changed files with 1,085 additions and 948 deletions.
119 changes: 65 additions & 54 deletions lib/src/agora_base.dart

Large diffs are not rendered by default.

351 changes: 182 additions & 169 deletions lib/src/agora_base.g.dart

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion lib/src/agora_log.g.dart

Some generated files are not rendered by default.

67 changes: 49 additions & 18 deletions lib/src/agora_media_base.dart
@@ -65,11 +65,11 @@ enum VideoSourceType {
@JsonValue(10)
videoSourceTranscoded,

/// 11: (For Windows and macOS only) The third camera.
/// 11: (For Android, Windows, and macOS only) The third camera.
@JsonValue(11)
videoSourceCameraThird,

/// 12: (For Windows and macOS only) The fourth camera.
/// 12: (For Android, Windows, and macOS only) The fourth camera.
@JsonValue(12)
videoSourceCameraFourth,

@@ -267,7 +267,7 @@ enum MediaSourceType {
@JsonValue(5)
secondaryScreenSource,

/// 6. Custom video source.
/// 6: Custom video source.
@JsonValue(6)
customVideoSource,

@@ -944,7 +944,7 @@ class VideoFrame {
@JsonKey(name: 'rotation')
final int? rotation;

/// The Unix timestamp (ms) when the video frame is rendered. This timestamp can be used to guide the rendering of the video frame. It is required.
/// The Unix timestamp (ms) when the video frame is rendered. This timestamp can be used to guide the rendering of the video frame. This parameter is required.
@JsonKey(name: 'renderTimeMs')
final int? renderTimeMs;

@@ -1083,7 +1083,7 @@ class AudioFrameObserverBase {
/// Gets the captured audio frame.
///
/// To ensure that the data format of the captured audio frame is as expected, Agora recommends that you set the audio data format as follows: after calling setRecordingAudioFrameParameters to set the audio data format, call registerAudioFrameObserver to register the audio frame observer object. The SDK calculates the sampling interval according to the parameters set in this method and triggers the onRecordAudioFrame callback at that interval.
/// Due to the limitations of Flutter, this callback does not support sending processed audio data back to the SDK.
/// Due to framework limitations, this callback does not support sending processed audio data back to the SDK.
///
/// * [audioFrame] The raw audio data. See AudioFrame.
/// * [channelId] The channel ID.
@@ -1093,7 +1093,7 @@ class AudioFrameObserverBase {
/// Gets the raw audio frame for playback.
///
/// To ensure that the data format of the audio frame for playback is as expected, Agora recommends that you set the audio data format as follows: after calling setPlaybackAudioFrameParameters to set the audio data format and registerAudioFrameObserver to register the audio frame observer object, the SDK calculates the sampling interval according to the parameters set in these methods and triggers the onPlaybackAudioFrame callback at that interval.
/// Due to the limitations of Flutter, this callback does not support sending processed audio data back to the SDK.
/// Due to framework limitations, this callback does not support sending processed audio data back to the SDK.
///
/// * [audioFrame] The raw audio data. See AudioFrame.
/// * [channelId] The channel ID.
@@ -1103,7 +1103,7 @@ class AudioFrameObserverBase {
/// Retrieves the mixed captured and playback audio frame.
///
/// To ensure that the data format of the mixed captured and playback audio frame meets expectations, Agora recommends that you set the data format as follows: after calling setMixedAudioFrameParameters to set the audio data format and registerAudioFrameObserver to register the audio frame observer object, the SDK calculates the sampling interval according to the parameters set in these methods and triggers the onMixedAudioFrame callback at that interval.
/// Due to the limitations of Flutter, this callback does not support sending processed audio data back to the SDK.
/// Due to framework limitations, this callback does not support sending processed audio data back to the SDK.
///
/// * [audioFrame] The raw audio data. See AudioFrame.
/// * [channelId] The channel ID.
@@ -1113,7 +1113,7 @@ class AudioFrameObserverBase {
/// Gets the in-ear monitoring audio frame.
///
/// To ensure that the obtained in-ear monitoring audio data meets expectations, Agora recommends that you set the in-ear monitoring audio data format as follows: after calling setEarMonitoringAudioFrameParameters to set the audio data format and registerAudioFrameObserver to register the audio frame observer object, the SDK calculates the sampling interval according to the parameters set in these methods and triggers the onEarMonitoringAudioFrame callback at that interval.
/// Due to the limitations of Flutter, this callback does not support sending processed audio data back to the SDK.
/// Due to framework limitations, this callback does not support sending processed audio data back to the SDK.
///
/// * [audioFrame] The raw audio data. See AudioFrame.
final void Function(AudioFrame audioFrame)? onEarMonitoringAudioFrame;
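
The four callbacks above share the same registration flow. Below is a minimal sketch of that flow for the capture callback, assuming the RtcEngine/MediaEngine API of this package (agora_rtc_engine); the format parameters are illustrative values, not recommendations.

import 'package:agora_rtc_engine/agora_rtc_engine.dart';

Future<void> observeCapturedAudio(RtcEngine engine) async {
  // Set the capture format first; the SDK derives the sampling interval
  // (and therefore the callback frequency) from these parameters.
  await engine.setRecordingAudioFrameParameters(
    sampleRate: 16000, // illustrative values, not recommendations
    channel: 1,
    mode: RawAudioFrameOpModeType.rawAudioFrameOpModeReadOnly,
    samplesPerCall: 1024,
  );

  // Register the observer; onRecordAudioFrame then fires at the calculated
  // interval. Frames are read-only in Dart: processed audio data cannot be
  // sent back to the SDK.
  engine.getMediaEngine().registerAudioFrameObserver(
    AudioFrameObserver(
      onRecordAudioFrame: (String channelId, AudioFrame frame) {
        print('captured ${frame.samplesPerChannel} samples in $channelId');
      },
    ),
  );
}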
@@ -1165,7 +1165,7 @@ class AudioFrame {
@JsonKey(name: 'samplesPerChannel')
final int? samplesPerChannel;

/// The number of bytes per sample. The number of bytes per audio sample, which is usually 16-bit (2-byte).
/// The number of bytes per sample. For PCM, this parameter is generally set to 16 bits (2 bytes).
@JsonKey(name: 'bytesPerSample')
final BytesPerSample? bytesPerSample;

@@ -1319,7 +1319,7 @@ class AudioFrameObserver extends AudioFrameObserverBase {

/// Retrieves the audio frame of a specified user before mixing.
///
/// Due to the limitations of Flutter, this callback does not support sending processed audio data back to the SDK.
/// Due to framework limitations, this callback does not support sending processed audio data back to the SDK.
///
/// * [channelId] The channel ID.
/// * [uid] The user ID of the specified user.
@@ -1430,10 +1430,7 @@ class VideoFrameObserver {

/// Occurs each time the SDK receives a video frame captured by local devices.
///
/// After you successfully register the video frame observer, the SDK triggers this callback each time it receives a video frame. In this callback, you can get the video data captured by local devices. You can then pre-process the data according to your scenarios.
/// The video data that this callback gets has not been pre-processed such as watermarking, cropping, and rotating.
/// If the video data type you get is RGBA, the SDK does not support processing the data of the alpha channel.
/// Due to the limitations of Flutter, this callback does not support sending processed video data back to the SDK.
/// You can get raw video data collected by the local device through this callback.
///
/// * [sourceType] The type of the video source, such as a camera, screen, or media player. See VideoSourceType.
/// * [videoFrame] The video frame. See VideoFrame. The default value of the video frame data format obtained through this callback is as follows:
@@ -1447,7 +1444,7 @@
/// Occurs each time the SDK receives a video frame before encoding.
///
/// After you successfully register the video frame observer, the SDK triggers this callback each time it receives a video frame. In this callback, you can get the video data before encoding and then process the data according to your particular scenarios.
/// Due to the limitations of Flutter, this callback does not support sending processed video data back to the SDK.
/// Due to framework limitations, this callback does not support sending processed video data back to the SDK.
/// The video data that this callback gets has been preprocessed, with its content cropped and rotated, and the image enhanced.
///
/// * [videoFrame] The video frame. See VideoFrame. The default value of the video frame data format obtained through this callback is as follows:
@@ -1467,7 +1464,7 @@
///
/// After you successfully register the video frame observer, the SDK triggers this callback each time it receives a video frame. In this callback, you can get the video data sent from the remote end before rendering, and then process it according to the particular scenarios.
/// If the video data type you get is RGBA, the SDK does not support processing the data of the alpha channel.
/// Due to the limitations of Flutter, this callback does not support sending processed video data back to the SDK.
/// Due to framework limitations, this callback does not support sending processed video data back to the SDK.
///
/// * [videoFrame] The video frame. See VideoFrame. The default value of the video frame data format obtained through this callback is as follows:
/// Android: I420 or RGB (GLES20.GL_TEXTURE_2D)
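
For reference, here is a minimal sketch of registering a video frame observer for these callbacks, again assuming the MediaEngine API of this package; as the notes above state, the frames are read-only from Dart.

void observeVideoFrames(RtcEngine engine) {
  engine.getMediaEngine().registerVideoFrameObserver(
    VideoFrameObserver(
      // Locally captured frame, before watermarking, cropping, or rotation.
      onCaptureVideoFrame: (VideoSourceType sourceType, VideoFrame frame) {
        print('local ${frame.width}x${frame.height} from $sourceType');
      },
      // Remote frame, before rendering.
      onRenderVideoFrame: (String channelId, int remoteUid, VideoFrame frame) {
        print('remote $remoteUid in $channelId, rotation ${frame.rotation}');
      },
    ),
  );
}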
@@ -1688,14 +1685,48 @@ class MediaRecorderConfiguration {
Map<String, dynamic> toJson() => _$MediaRecorderConfigurationToJson(this);
}

/// @nodoc
/// Facial information observer.
///
/// You can call registerFaceInfoObserver to register or unregister the FaceInfoObserver object.
class FaceInfoObserver {
/// @nodoc
const FaceInfoObserver({
this.onFaceInfo,
});

/// @nodoc
/// Occurs when the facial information processed by the speech driven extension is received.
///
/// * [outFaceInfo] Output parameter; the JSON string of the facial information processed by the speech driven extension, including the following fields:
/// faces: Object sequence. The collection of facial information, with each face corresponding to an object.
/// blendshapes: Object. The collection of face capture coefficients, named according to ARKit standards, with each key-value pair representing a blendshape coefficient. The blendshape coefficient is a floating point number with a range of [0.0, 1.0].
/// rotation: Object. The rotation of the head, which includes the following three key-value pairs, with values as floating point numbers ranging from -180.0 to 180.0:
/// pitch: Head pitch angle. A positive value means looking down, while a negative value means looking up.
/// yaw: Head yaw angle. A positive value means turning left, while a negative value means turning right.
/// roll: Head roll angle. A positive value means tilting to the right, while a negative value means tilting to the left.
/// timestamp: String. The timestamp of the output result, in milliseconds. Here is a JSON example:
/// {
/// "faces":[{
/// "blendshapes":{
/// "eyeBlinkLeft":0.9, "eyeLookDownLeft":0.0, "eyeLookInLeft":0.0, "eyeLookOutLeft":0.0, "eyeLookUpLeft":0.0,
/// "eyeSquintLeft":0.0, "eyeWideLeft":0.0, "eyeBlinkRight":0.0, "eyeLookDownRight":0.0, "eyeLookInRight":0.0,
/// "eyeLookOutRight":0.0, "eyeLookUpRight":0.0, "eyeSquintRight":0.0, "eyeWideRight":0.0, "jawForward":0.0,
/// "jawLeft":0.0, "jawRight":0.0, "jawOpen":0.0, "mouthClose":0.0, "mouthFunnel":0.0, "mouthPucker":0.0,
/// "mouthLeft":0.0, "mouthRight":0.0, "mouthSmileLeft":0.0, "mouthSmileRight":0.0, "mouthFrownLeft":0.0,
/// "mouthFrownRight":0.0, "mouthDimpleLeft":0.0, "mouthDimpleRight":0.0, "mouthStretchLeft":0.0, "mouthStretchRight":0.0,
/// "mouthRollLower":0.0, "mouthRollUpper":0.0, "mouthShrugLower":0.0, "mouthShrugUpper":0.0, "mouthPressLeft":0.0,
/// "mouthPressRight":0.0, "mouthLowerDownLeft":0.0, "mouthLowerDownRight":0.0, "mouthUpperUpLeft":0.0, "mouthUpperUpRight":0.0,
/// "browDownLeft":0.0, "browDownRight":0.0, "browInnerUp":0.0, "browOuterUpLeft":0.0, "browOuterUpRight":0.0,
/// "cheekPuff":0.0, "cheekSquintLeft":0.0, "cheekSquintRight":0.0, "noseSneerLeft":0.0, "noseSneerRight":0.0,
/// "tongueOut":0.0
/// },
/// "rotation":{"pitch":30.0, "yaw":25.5, "roll":-15.5},
///
/// }],
/// "timestamp":"654879876546"
/// }
///
/// Returns
/// true: Facial information JSON parsing succeeded. false: Facial information JSON parsing failed.
final void Function(String outFaceInfo)? onFaceInfo;
}
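
A minimal sketch of consuming this callback, assuming registerFaceInfoObserver is exposed on MediaEngine as the doc comment above suggests, and decoding the JSON fields it describes with dart:convert:

import 'dart:convert';

import 'package:agora_rtc_engine/agora_rtc_engine.dart';

void observeFaceInfo(RtcEngine engine) {
  engine.getMediaEngine().registerFaceInfoObserver(
    FaceInfoObserver(
      onFaceInfo: (String outFaceInfo) {
        final info = jsonDecode(outFaceInfo) as Map<String, dynamic>;
        for (final face in (info['faces'] as List<dynamic>)) {
          final map = face as Map<String, dynamic>;
          final blendshapes = map['blendshapes'] as Map<String, dynamic>;
          final rotation = map['rotation'] as Map<String, dynamic>;
          // Blendshape coefficients lie in [0.0, 1.0]; rotation angles lie
          // in [-180.0, 180.0], per the doc comment above.
          print('jawOpen=${blendshapes['jawOpen']} '
              'pitch=${rotation['pitch']} yaw=${rotation['yaw']}');
        }
      },
    ),
  );
}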

