Commit c1f6d3e6 authored by Simon Morlat's avatar Simon Morlat

updated documentation and conferencing api cleanups

parent 698cadf4
Project : mediastreamer2 - a modular sound and video processing and streaming
Email : simon.morlat_at_linphone.org
License : GPL
Home Page : http://savannah.gnu.org/projects/linphone
License : GPLv2 or Commercial licensing
Home Page : http://www.mediastreamer2.com
Mediastreamer2 is a GPL licensed library to make audio and
Commercial support and licensing is provided by Belledonne Communications
http://www.belledonne-communications.com
Mediastreamer2 is a library to make audio and
video real-time streaming and processing. Written in pure C,
it is based upon the ortp library.
......@@ -20,61 +23,6 @@ file.
There is doxygen documentation for more information.
Features:
--------
mediastreamer2 already provides a large set of filters.
Here is a complete list of built-in filters.
All supported platforms:
* RTP receiver
* RTP sender
* tee (duplicate data)
Audio Filters:
* audio capture & playback
* mme API (windows)
* alsa API (linux)
* oss and oss4 apis (linux)
* MacOS X Audio Units
* MacOS X Audio Queues (discouraged)
* iOS Audio Unit (Voice Processing audio unit for iPhone)
* Android sound system
* portaudio API (macosx and other)
* several audio encoders/decoders: PCMU, PCMA, speex, gsm
* wav file reader.
* wav file recorder.
* resampler.
* conference bridge.
* volume analyser, gain control, and automatic gain control.
* acoustic echo canceller.
* dtmf and custom tone generation filter.
* custom tone detection
* parametric equalizer, which can be used to compensate for the spectral response of a poor quality speaker or microphone
* echo limiter for cases where echo cancellation cannot work because of heavy distortion.
Video Filters:
* video capture
* v4w API (windows, deprecated)
* directshow API (windows)
* video4linux and video4linux2 APIs (linux)
* QTKit API (macosx)
* video display
* vfw API (windows)
* SDL API (linux, macosx [>=1.3], ...)
* Android native display
* several video encoders/decoders: H263-1998, MP4V-ES, H264, theora
* image resizer.
* format converter. (RGB24, I420...)
Plugin Filters:
* iLBC decoder/encoder.
* H264 codec, based on x264
Note that you can build your own components/filters to do your
own processing or support other codecs.
Compilation and installation
----------------------------
......@@ -100,16 +48,11 @@ More instructions and advices can be found for the mingw compilation procedure i
Contact information:
--------------------
For more information on mediastreamer2, any contributions, or any remarks,
you can contact me at <simon.morlat_at_linphone.org>.
Use the *linphone* mailing list for questions about mediastreamer2.
<linphone-developers@nongnu.org>.
Subscribe here:
https://savannah.nongnu.org/mail/?group=linphone
Subscribe by writing to:
<linphone-developers-request@nongnu.org> with a subject set to "subscribe".
Commercial support and licensing is provided by Belledonne Communications
http://www.belledonne-communications.com
/**
* @mainpage
* Project Website: http://www.linphone.org
*
* Project Website: http://www.mediastreamer2.com
*
* @verbinclude README
*
*/
/**
* @defgroup mediastreamer2 mediastreamer2 library - a modular sound and video processing and streaming
* @brief mediastreamer2
*
* @see http://www.linphone.org/eng/documentation/dev/mediastreamer2.html
* @defgroup mediastreamer2_intro Introduction to mediastreamer2 concepts.
* @brief Introduction
*
* @section what_is_it What is mediastreamer2
*
......@@ -18,8 +16,10 @@
* mediastreamer2 is GPL (COPYING). Please understand the licensing details
* before using it!
*
*Commercial support and licensing is provided by Belledonne Communications
*http://www.belledonne-communications.com
* Commercial support and licensing is provided by Belledonne Communications
* http://www.belledonne-communications.com
*
* @see http://www.linphone.org/eng/documentation/dev/mediastreamer2.html
*
* @section definitions Some definitions.
*
......@@ -41,7 +41,7 @@
* data from OUTPUT pins to INPUT pins and will be responsible for
* running filters.
*
* @section when_do_i_use_mediastreamer2 How do I use mediastreamer2?
* @section how_do_i_use_mediastreamer2 How do I use mediastreamer2?
*
* Mediastreamer2 can be used for a lot of different purposes. The primary
* use is to manage RTP audio and video sessions. You will need to use
......@@ -123,26 +123,21 @@
* H264 decoder/encoder.
* </PRE>
*
* @section what_thanks Thanks
*
* Thanks to all the contributors and to all bug reporters.
* Enjoy mediastreamer2!
*
*/
/**
* @defgroup mediastreamer2_api Mediastreamer2 API
* @brief All API to manage mediastreamer2 library.
* @defgroup mediastreamer2_api Mediastreamer2's base APIs
* @brief Base APIs of mediastreamer2
*
* Mediastreamer2 exposes a low-level API to directly control filters, chain them together and run them.
*/
/**
* @defgroup mediastreamer2_init Init API - manage mediastreamer2 library.
* @defgroup mediastreamer2_init Starting mediastreamer2 library.
* @ingroup mediastreamer2_api
* @brief Init API to manage mediastreamer2 library.
* @brief Starting mediastreamer2 library.
*
* This file provides the API needed to initialize
* and reset the mediastreamer2 library.
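*
* A minimal sketch (assuming the usual ms_init()/ms_exit() entry points, which are not shown in this excerpt):
* <PRE>
* ms_init();    // initializes the library and registers the built-in filters
* // ... create filters, graphs and tickers, run your streams ...
* ms_exit();    // releases global resources
* </PRE>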
*/
/**
......@@ -150,8 +145,6 @@
* @ingroup mediastreamer2_api
* @brief Sound Card API to manage audio capture/play filters.
*
* This file provide the API needed to manage
* soundcard filters.
*/
/**
......@@ -159,8 +152,8 @@
* @ingroup mediastreamer2_api
* @brief Filter API to manage mediastreamer2 filters.
*
* This file provide the API needed to create, link,
* unlink, find and destroy filter.
* This section documents the API needed to create, link,
* unlink, find and destroy filters.
*
* It also provides definitions if you wish to implement
* your own filters.
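*
* A minimal sketch of the filter API (MS_FILE_PLAYER_ID and MS_FILE_REC_ID are built-in filter ids,
* assumed here from allfilters.h):
* <PRE>
* MSFilter *src=ms_filter_new(MS_FILE_PLAYER_ID);
* MSFilter *dst=ms_filter_new(MS_FILE_REC_ID);
* ms_filter_link(src,0,dst,0);     // connect output pin 0 of src to input pin 0 of dst
* // ... attach any filter of the graph to a ticker to run it ...
* ms_filter_unlink(src,0,dst,0);
* ms_filter_destroy(src);
* ms_filter_destroy(dst);
* </PRE>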
......@@ -171,11 +164,35 @@
* @ingroup mediastreamer2_api
* @brief Ticker API to manage mediastreamer2 graphs.
*
* This file provide the API needed to create, start
* and stop a graph.
* Describes the ticker API. The ticker is the thread responsible for scheduling audio & video processing for
* one or several filter graphs.
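*
* A minimal sketch (assuming 'f1' is a filter belonging to an already linked graph):
* <PRE>
* MSTicker *ticker=ms_ticker_new();
* ms_ticker_attach(ticker,f1);   // schedules the whole graph connected to f1
* // ... the graph is now running ...
* ms_ticker_detach(ticker,f1);
* ms_ticker_destroy(ticker);
* </PRE>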
*/
/**
* @defgroup mediastreamer2_high_api Mediastreamer2's high level APIs
* @brief High level APIs of mediastreamer2
*
* The high level APIs are designed to provide an easy way to create audio or video processing graphs for
* VoIP.
*/
/**
* @defgroup audio_stream_api Creating typical VoIP audio streams.
* @ingroup mediastreamer2_high_api
* @brief Audio streaming API - easily run audio streams from soundcard or wav files to RTP.
**/
/**
* @defgroup mediastreamer2_audio_conference Audio conferencing API - easily create conferences.
* @ingroup mediastreamer2_high_api
* @brief Audio conferencing API - easily create conferences.
*
*
*/
/**
* @page mediastreamer2_readme README
* @verbinclude README
......
/**
* @defgroup filters
* @defgroup filters Filters documentation
* @ingroup mediastreamer2
<H1>Filter lists</H1>
/**
* @defgroup mssilk - SILK (Skype codec) plugin
* @ingroup filters
......@@ -37,5 +35,5 @@
* <li><b>#MS_FILTER_SET_SAMPLE_RATE</b> Set output sampling rate. This value is internally mapped to the API sampling rate. Supported values are 8000, 12000, 16000, 24000, 32000, 44000 and 48000. This value can be changed at any time.
*</ul>
*<br>
*/
*/
\ No newline at end of file
*
*/
/**
* @defgroup howto0_samplegraph Howto 1: build a sample audio graph.
* @ingroup mediastreamer2
* @ingroup mediastreamer2_intro
<H1>Initialize mediastreamer2</H1>
......
......@@ -83,6 +83,14 @@ struct _AudioStream
extern "C" {
#endif
/**
* @addtogroup audio_stream_api
* @{
**/
/**
* The AudioStream object holds all the resources needed to create and run a typical VoIP audio stream.
**/
typedef struct _AudioStream AudioStream;
struct _RingStream
......@@ -104,11 +112,33 @@ MS2_PUBLIC AudioStream *audio_stream_start (RtpProfile * prof, int locport, cons
MS2_PUBLIC AudioStream *audio_stream_start_with_sndcards(RtpProfile * prof, int locport, const char *remip4, int remport, int payload_type, int jitt_comp, MSSndCard *playcard, MSSndCard *captcard, bool_t echocancel);
MS2_PUBLIC int audio_stream_start_with_files (AudioStream * stream, RtpProfile * prof,
const char *remip, int remport, int rem_rtcp_port,
int pt, int jitt_comp,
const char * infile, const char * outfile);
/**
* Starts an audio stream from/to local wav files or soundcards.
*
* This method starts the processing of the audio stream, that is playing from wav file or soundcard, voice processing, encoding,
* sending through RTP, receiving from RTP, decoding, voice processing and wav file recording or soundcard playback.
*
*
* @param stream an AudioStream previously created with audio_stream_new().
* @param profile an RtpProfile containing all the PayloadTypes usable during the audio session.
* @param remip remote IP address where to send the encoded audio.
* @param remport remote port where to send the encoded audio.
* @param rem_rtcp_port remote port for RTCP.
* @param payload payload type index to use for the sending stream. This index must point to a valid PayloadType in the RtpProfile.
* @param jitt_comp Nominal jitter buffer size in milliseconds.
* @param infile path to wav file to play out (can be NULL)
* @param outfile path to wav file to record into (can be NULL)
* @param playcard The soundcard to be used for playback (can be NULL)
* @param captcard The soundcard to be used for capture (can be NULL)
* @param use_ec whether echo cancellation is to be performed.
* @returns 0 if successful, -1 otherwise.
**/
MS2_PUBLIC int audio_stream_start_full(AudioStream *stream, RtpProfile *profile, const char *remip,int remport,
int rem_rtcp_port, int payload,int jitt_comp, const char *infile, const char *outfile,
MSSndCard *playcard, MSSndCard *captcard, bool_t use_ec);
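/*
* A minimal usage sketch (not part of the original header; the profile, address, ports, payload type
* and wav file name are placeholder values, and audio_stream_stop() is assumed from the rest of this header):
*
*   AudioStream *st = audio_stream_new(7078, FALSE);
*   audio_stream_start_full(st, profile, "192.168.1.10", 7078, 7079, 0, 60,
*                           "prompt.wav", NULL, NULL, NULL, FALSE);
*   ...
*   audio_stream_stop(st);
*/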
......@@ -120,16 +150,39 @@ MS2_PUBLIC void audio_stream_set_rtcp_information(AudioStream *st, const char *c
MS2_PUBLIC void audio_stream_play_received_dtmfs(AudioStream *st, bool_t yesno);
/* these two functions do the same as audio_stream_start() but in two steps;
this is useful to make sure that sockets are open before sending an INVITE,
or to start streaming only after receiving an ACK.*/
/**
* Creates an AudioStream object listening on a RTP port.
* @param locport the local UDP port to listen for RTP packets.
* @param ipv6 TRUE if ipv6 must be used.
* @returns a new AudioStream.
**/
MS2_PUBLIC AudioStream *audio_stream_new(int locport, bool_t ipv6);
/**
* Starts an audio stream from local soundcards.
*
* This method starts the processing of the audio stream, that is capture from soundcard, voice processing, encoding,
* sending through RTP, receiving from RTP, decoding, voice processing and soundcard playback.
*
* @param stream an AudioStream previously created with audio_stream_new().
* @param prof a RtpProfile containing all the PayloadTypes usable during the audio session.
* @param remip remote IP address where to send the encoded audio.
* @param remport remote IP port where to send the encoded audio
* @param rem_rtcp_port remote port for RTCP.
* @param payload_type payload type index to use for the sending stream. This index must point to a valid PayloadType in the RtpProfile.
* @param jitt_comp Nominal jitter buffer size in milliseconds.
* @param playcard The soundcard to be used for playback
* @param captcard The soundcard to be used for capture.
* @param echo_cancel whether echo cancellation is to be performed.
**/
MS2_PUBLIC int audio_stream_start_now(AudioStream * stream, RtpProfile * prof, const char *remip, int remport, int rem_rtcp_port, int payload_type, int jitt_comp,MSSndCard *playcard, MSSndCard *captcard, bool_t echo_cancel);
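/*
* Two-step usage sketch (placeholder values): the stream is created early so that the RTP socket is
* already bound when the INVITE is sent, and streaming starts only once the call has been accepted.
*
*   AudioStream *st = audio_stream_new(7078, FALSE);
*   ... SDP negotiation happens here ...
*   audio_stream_start_now(st, profile, "192.168.1.10", 7078, 7079, 0, 60,
*                          playcard, captcard, TRUE);
*/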
MS2_PUBLIC void audio_stream_set_relay_session_id(AudioStream *stream, const char *relay_session_id);
/*returns TRUE if data has been received from the remote end within the last 'timeout' seconds*/
MS2_PUBLIC bool_t audio_stream_alive(AudioStream * stream, int timeout);
/*execute background tasks related to audio processing*/
/**
* Executes background low-priority tasks related to audio processing (RTP statistics analysis).
* It should be called periodically, for example every 100 ms or so.
*/
MS2_PUBLIC void audio_stream_iterate(AudioStream *stream);
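/*
* A possible polling loop (a sketch; ms_usleep() is assumed from mscommon.h):
*
*   while (audio_stream_alive(st, 30)) {
*       audio_stream_iterate(st);
*       ms_usleep(100000);
*   }
*/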
/*enable the echo-limiter mechanism: one MSVolume in the input branch controls an MSVolume in the output branch*/
......@@ -309,5 +362,8 @@ MS2_PUBLIC bool_t ms_is_ipv6(const char *address);
}
#endif
/**
* @}
**/
#endif
......@@ -22,37 +22,48 @@ Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
* Convenient API to create and manage audio conferences.
*/
#ifndef conference_h
#define conference_h
#ifndef msconference_h
#define msconference_h
#include "mediastreamer2/mediastream.h"
typedef struct _MSAudioConferenceParams{
int samplerate;
}MSAudioConferenceParams;
/**
* @addtogroup mediastreamer2_audio_conference
* @{
*/
struct _MSAudioConference{
MSTicker *ticker;
MSFilter *mixer;
MSAudioConferenceParams params;
int nmembers;
/**
* Structure that holds audio conference parameters
**/
struct _MSAudioConferenceParams{
int samplerate; /**< Conference audio sampling rate in Hz: 8000, 16000 ...*/
};
/**
* Typedef to structure that holds conference parameters
**/
typedef struct _MSAudioConferenceParams MSAudioConferenceParams;
/**
* The MSAudioConference is the object representing an audio conference.
*
* First, the conference has to be created with ms_audio_conference_new(), supplying its parameters.
* Then, participants can be added to the conference with ms_audio_conference_add_member().
* The MSAudioConference takes care of mixing the audio and dispatching it to the participants.
* If participants (MSAudioEndpoint) use a sampling rate different from the conference's, then sample rate converters are automatically added
* and configured.
* Participants can be removed from the conference with ms_audio_conference_remove_member().
* The conference processing is performed in a new thread run by an MSTicker object, which is owned by the conference.
* When all participants have been removed, the MSAudioConference object can be safely destroyed with ms_audio_conference_destroy().
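*
* A minimal usage sketch (endpoint creation is shown with ms_audio_endpoint_get_from_stream() below;
* the endpoint variables are placeholders):
* <PRE>
* MSAudioConferenceParams params;
* params.samplerate=16000;
* MSAudioConference *conf=ms_audio_conference_new(&params);
* ms_audio_conference_add_member(conf,local_endpoint);
* ms_audio_conference_add_member(conf,remote_endpoint);
* //... conference is running ...
* ms_audio_conference_remove_member(conf,remote_endpoint);
* ms_audio_conference_remove_member(conf,local_endpoint);
* ms_audio_conference_destroy(conf);
* </PRE>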
**/
typedef struct _MSAudioConference MSAudioConference;
struct _MSAudioEndpoint{
AudioStream *st;
MSFilter *in_resampler,*out_resampler;
MSCPoint out_cut_point;
MSCPoint in_cut_point;
MSCPoint in_cut_point_prev;
MSCPoint mixer_in;
MSCPoint mixer_out;
MSAudioConference *conference;
int pin;
int samplerate;
};
/**
* The MSAudioEndpoint represents a participant in the conference.
* It can be constructed from an existing AudioStream object with
* ms_audio_endpoint_get_from_stream().
**/
typedef struct _MSAudioEndpoint MSAudioEndpoint;
......@@ -61,21 +72,114 @@ typedef struct _MSAudioEndpoint MSAudioEndpoint;
extern "C" {
#endif
/**
* Creates a conference.
* @param params a MSAudioConferenceParams structure, containing conference parameters.
* @returns a MSAudioConference object.
**/
MSAudioConference * ms_audio_conference_new(const MSAudioConferenceParams *params);
/**
* Gets the conference's current parameters.
* @param obj the conference.
* @returns a read-only pointer to the conference parameters.
**/
const MSAudioConferenceParams *ms_audio_conference_get_params(MSAudioConference *obj);
/**
* Adds a participant to the conference.
* @param obj the conference
* @param ep the participant, represented as a MSAudioEndpoint object
**/
void ms_audio_conference_add_member(MSAudioConference *obj, MSAudioEndpoint *ep);
/**
* Removes a participant from the conference.
* @param obj the conference
* @param ep the participant, represented as a MSAudioEndpoint object
**/
void ms_audio_conference_remove_member(MSAudioConference *obj, MSAudioEndpoint *ep);
/**
* Mutes or unmutes a participant.
*
* @param obj the conference
* @param ep the participant, represented as a MSAudioEndpoint object
*
* By default all participants are unmuted.
**/
void ms_audio_conference_mute_member(MSAudioConference *obj, MSAudioEndpoint *ep, bool_t muted);
int ms_audio_conference_size(MSAudioConference *obj);
/**
* Returns the size (i.e. the number of participants) of a conference.
* @param obj the conference
**/
int ms_audio_conference_get_size(MSAudioConference *obj);
/**
* Destroys a conference.
* All participants must have been removed before destroying the conference.
* @param obj the conference
**/
void ms_audio_conference_destroy(MSAudioConference *obj);
MSAudioEndpoint * ms_audio_endpoint_get_from_stream(AudioStream *st, bool_t is_remote);
void ms_audio_endpoint_release_from_stream(MSAudioEndpoint *obj);
/**
* Creates an MSAudioEndpoint from an existing AudioStream.
*
* In order to create the audio processing graph of each participant, the AudioStream object is used, because
* this object already handles all the processing for volume control, encoding, decoding, etc.
*
* The construction of a participant depends on whether it is a remote participant, that is somebody on the network
* sending and receiving audio through RTP, or a local participant, that is somebody using the local soundcard to capture
* and play audio.
*
* To create a remote participant, first create and start an AudioStream for the participant with audio_stream_new() and
* audio_stream_start_with_files(), passing NULL arguments as input and output files.
* This participant does not interact with soundcards; this is why audio_stream_start_with_files() is suggested instead of
* audio_stream_start_full(), as it avoids holding any reference to the sound system.
* Then, create a MSAudioEndpoint representing this participant by calling ms_audio_endpoint_get_from_stream() with
* is_remote=TRUE.
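*
* For example (a sketch; the profile, ports and payload type variables are placeholders):
* <PRE>
* AudioStream *rst=audio_stream_new(remote_rtp_port,FALSE);
* audio_stream_start_with_files(rst,profile,remote_ip,remote_port,remote_rtcp_port,payload,jitt_comp,NULL,NULL);
* MSAudioEndpoint *remote_endpoint=ms_audio_endpoint_get_from_stream(rst,TRUE);
* </PRE>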
*
* To create a local participant, first create and start an AudioStream with audio_stream_new() and audio_stream_start_full(),
* with real soundcard arguments.
* Arguments controlling RTP should be filled with placeholder values; they will not be used for conferencing.
* Then, create a MSAudioEndpoint representing this local participant by calling ms_audio_endpoint_get_from_stream()
* with the AudioStream and is_remote=FALSE.<br>
* For example:<br>
* <PRE>
* AudioStream *st=audio_stream_new(65000,FALSE);
* audio_stream_start_full(st, conf->local_dummy_profile,
* "127.0.0.1",
* 65000,
* 65001,
* 0,
* 40,
* NULL,
* NULL,
* playcard,
* captcard,
* needs_echocancellation
* );
* MSAudioEndpoint *local_endpoint=ms_audio_endpoint_get_from_stream(st,FALSE);
* </PRE>
**/
MSAudioEndpoint * ms_audio_endpoint_get_from_stream(AudioStream *st, bool_t is_remote);
/**
* Destroys a MSAudioEndpoint that was created from an AudioStream with ms_audio_endpoint_get_from_stream().
* The AudioStream can then be destroyed if needed.
**/
void ms_audio_endpoint_release_from_stream(MSAudioEndpoint *obj);
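/*
* Typical teardown sketch when a participant leaves (a hedged example; 'conf', 'ep' and 'st' are the
* conference, the endpoint and its underlying AudioStream; audio_stream_stop() is assumed from mediastream.h):
*
*   ms_audio_conference_remove_member(conf, ep);
*   ms_audio_endpoint_release_from_stream(ep);
*   audio_stream_stop(st);
*/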
#ifdef __cplusplus
}
#endif
/**
* @}
*/
#endif
......@@ -21,6 +21,25 @@ Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#include "mediastreamer2/msconference.h"
#include "mediastreamer2/msaudiomixer.h"
struct _MSAudioConference{
MSTicker *ticker;
MSFilter *mixer;
MSAudioConferenceParams params;
int nmembers;
};
struct _MSAudioEndpoint{
AudioStream *st;
MSFilter *in_resampler,*out_resampler;
MSCPoint out_cut_point;
MSCPoint in_cut_point;
MSCPoint in_cut_point_prev;
MSCPoint mixer_in;
MSCPoint mixer_out;
MSAudioConference *conference;
int pin;
int samplerate;
};
extern MSTickerPrio __ms_get_default_prio(bool_t is_video);
......@@ -200,6 +219,6 @@ void ms_audio_endpoint_destroy(MSAudioEndpoint *ep){
ms_free(ep);
}
int ms_audio_conference_size(MSAudioConference *obj){
return obj == NULL ? 0 : obj->nmembers;
int ms_audio_conference_get_size(MSAudioConference *obj){
return obj->nmembers;
}