MJPEG HOWTO - An introduction to the MJPEG-tools
Praschinger Bernhard
v1.50
MJPEG capture/editing/replay and MPEG encoding toolset description
______________________________________________________________________
Table of Contents
1. Introduction
2. Unsorted list of useful Hints
2.1 Some books we found useful
3. Recording videos
3.1 lavrec examples
3.2 Other recording hints
3.3 Some information about the typical lavrec output while recording
3.4 Notes about "interlace field order - what can go wrong and how to fix it"
3.4.1 There are three bad things that can happen with fields
3.4.2 How can I recognize if I have one of these problems?
3.4.3 How can you fix it?
3.4.4 Hey, what about NTSC movies?
4. Creating videos from other sources
4.1 Creating videos from images
4.2 Decoding streams with mplayer
4.3 Decoding MPEG2 streams with mpeg2dec
4.4 Other things to know
5. Checking if recording was successful
6. Edit the video
6.1 Edit with glav
6.2 Unify videos
6.3 Separate sound
6.4 Separate images
6.5 Creating movie transitions
7. Converting the stream to MPEG or DIVx videos
7.1 Creating sound
7.2 Converting video
7.2.1 Scaling
7.3 Putting the streams together
7.4 Creating MPEG1 Videos
7.4.1 MPEG1 Audio creation Example
7.4.2 MPEG1 Video creation Example
7.4.3 MPEG1 Multiplexing Example
7.5 Creating MPEG2 Videos
7.5.1 MPEG2 Audio creation Example
7.5.2 MPEG2 Video creation Example
7.5.2.1 Which values should be used for VBR Encoding
7.5.2.2 Encoding destination TV (interlaced) or Monitor (progressive)
7.5.3 MPEG2 Multiplexing Example
7.6 Creating Video-CD's
7.6.1 VCD Audio creation Example
7.6.2 VCD Video creation Example
7.6.3 VCD Multiplexing Example
7.6.4 Creating the CD
7.6.5 Notes
7.6.6 Storing MPEGs
7.7 Creating SVCD
7.7.1 SVCD Audio creation Example
7.7.2 SVCD Video creation Example
7.7.3 SVCD Multiplexing Example
7.7.4 SVCD Creating the CD
7.8 Creating DVD's
7.8.1 DVD Audio creation Example
7.8.2 DVD Video creation Example
7.8.3 DVD Multiplexing Example
7.8.4 DVD creation Example
7.9 Creating DIVX Videos
7.9.1 lav2avi.sh
8. Optimizing the stream
8.1 Scaling and offset correction
8.2 Frame rate conversion
9. Transcoding of existing MPEG-2
9.1 If you want to do every step on your own it will look something like this
10. Trading Quality/Speed
10.1 Creating streams to be played from disk using Software players
11. SMP and distributed Encoding
12. Interoperability
______________________________________________________________________
1. Introduction
I wrote these things down because I had many sheets with notes on
them. This document is meant to be a summary of the knowledge
collected on those sheets. Andrew Stevens helped with encoding and VCD
knowledge and hints.
The mjpegtools are a set of programs that can do recording, playback,
editing and eventual MPEG compression of audio and video under Linux.
Although primarily intended for use with capture / playback boards
based on the Zoran ZR36067 MJPEG codec chip, the mjpegtools can easily
be used to process and compress MJPEG video streams captured using
xawtv using simple frame-buffer devices.
This HOWTO is intended to give an introduction to the MJPEG-tools,
the creation of MPEG 1/2 videos, VCDs and SVCDs, and the transcoding
of existing MPEG streams.
For more information about the programs read the corresponding man-
page.
Note: there is also a German version of this document, available at:
http://sourceforge.net/projects/mjpeg
There is also a man page of this text; if installed, you can read it
with "man mjpegtools". There is an info version as well, which you
should be able to read with info.
The text version of this document is available via CVS, and it is also
included in the tarball and the precompiled packages (RPM and deb).
The following picture shows the typical workflow when you record a
video, cut it afterwards and encode it. The picture also shows the
connections to other programs. These parts are in grey; the parts in
blue can be done with the mjpegtools.
Video encoding workflow
2. Unsorted list of useful Hints
You have to compile and install the mjpeg_play package, for this read
the README & REQUIRED_SOFTWARE & INSTALL. If you do not want to
compile it, you can download the mjpeg .RPM or .DEB package at
Sourceforge.
There is a script in the scripts/ directory. This script shows you one
way it can be done. It also creates (under certain circumstances)
videos that look quite good. You only get better videos by tuning the
parameters yourself.
If you use a Linux kernel from the 2.4 series, you will usually have
to load the drivers for the Buz, DC10 or LML33 cards. To do this, run
the update script with the name of your card as option. The script is
usually in /usr/src/driver-zoran/. The zoran kernel drivers shipped
before kernel 2.4.4 do not work; you have to use the driver available
from: http://mjpeg.sourceforge.net/driver-zoran
The 2.6 Linux kernel already includes the driver for the zoran cards;
you just need to make sure that it is loaded correctly.
The driver for the Matrox Marvel card also works, more information
about it: http://marvel.sourceforge.net
If you compile the tools on a P6 based computer (PPro, P-II, P-III,
P-4, Athlon, Duron) then never try to run them on a P5 based
computer (Pentium, Pentium-MMX, K6, K6-x, Cyrix, Via, Winchip). You'll
get an "illegal instruction" and the program won't work.
If lav2yuv dumps core, one possible cause is that no DV support was
included. To enable it, make sure that libdv is installed on the
system. This will be necessary if you are using a digital camera (or
an analog-to-DV converter such as the Canopus ADVC100) and converting
the DV AVI format into the MPEG format.
Start xawtv to see if you get a picture. If you want to use HW
playback of the recorded streams, you have to start xawtv (any TV
application works) once to get the streams played back. You should
also check the mixer settings of your sound card.
If you compile the tools on a platform other than Linux, not all tools
will work. Mjpegtools on an OS X system, for example, will not have
V4L (video4linux) capability.
Never try to stop or start the TV application when lavrec runs. If you
start or stop the TV application lavrec will stop recording, or your
computer could get "frozen". This is a problem of v4l (video4linux).
This problem is solved with v4l2. If you use v4l2 you can record the
video and stop and start the tv application whenever you want. But
v4l2 is currently (7. Jan. 2003) only supported for the zoran based
cards (BUZ, DC10, DC10+, LML33) if you use the CVS driver from
mjpeg.sf.net tagged with ZORAN_VIDEODEV_2. And this driver only works
with the 2.4.20 kernel and the 2.5.* development kernel.
One last thing about the data you get before we start:
Audio: ( Samplerate * Channels * Bitsize ) / (8 * 1024)
CD Quality:(44100 Samples/sec * 2 Channels * 16 Bit) / (8 * 1024) = 172.2 kB/sec
The 8 * 1024 converts the value from bit/sec to kByte/sec
Video: (width * height * framerate * quality ) / (200 * 1024)
PAL HALF Size : (352 * 288 * 25 * 80) / (200 * 1024) = 990 kB/sec
PAL FULL size : (720 * 576 * 25 * 80) / (200 * 1024) = 4050 kB/sec
NTSC HALF size: (352 * 240 * 30 * 80) / (200 * 1024) = 990 kB/sec
NTSC FULL size: (720 * 480 * 30 * 80) / (200 * 1024) = 4050 kB/sec
The 1024 converts the Bytes to kBytes. Not every card can record the
size mentioned. The Buz and Marvel G400 for example can only record a
size of 720x576 when using -d 1, the DC10 records a size of 384x288
when using -d 2.
The sum of the audio and video data rates is what your hard disk has
to be able to write as a constant stream; otherwise you will get lost
frames. For full-size PAL plus CD-quality audio that is roughly
4050 + 172 = about 4220 kB/sec.
If you want to play with --mjpeg-buffer-size, remember that the value
should be at least big enough that one frame fits in it. The size of
one frame is: (width * height * quality) / (200 * 1024) kB. If the
buffer is too small, the rate calculation doesn't match any more and
buffer overflows can happen. The maximum value is 512kB.
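For example, a full-size PAL frame at quality 50 works out to
(720 * 576 * 50) / (200 * 1024), roughly 101 kB per frame, so it fits
comfortably within the 512kB maximum.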
How video works and the difference between the video types is
explained here: http://www.mir.com/DMG/
There you also find how to create MPEG Still Images for VCD/SVCD.
A good description of DV (Digital Video) can be found here:
http://www.uwasa.fi/~f76998/video/conversion/
2.1. Some books we found useful
written in English:
o Digital Video and HDTV by Charles Poyton (ISBN 1-55860-792-7)
o Digital Video Compression by Peter Symes (ISBN 0-07-142487-3)
o Video Demystified by Keith Jack (ISBN 1-878707-56-6)
written in German:
o Fernsehtechnik von Rudolf Maeusl (ISBN 3-7785-2374-0)
o Professionelle Videotechnik - analoge und digitale Grundlagen von
U. Schmidt (ISBN 3-540-43974-9)
o Digitale Film- und Videotechnik von U. Schmidt (ISBN 3-446-21827-0)
If you know of another really good book about this, write to us!
3. Recording videos
3.1. lavrec examples
Recording with lavrec looks like this:
> lavrec -f a -i P -d 2 record.avi
Should start recording now,
-f a
use AVI as output format,
-i P
use as input source the SVHS-In with PAL format,
-d 2
the pictures are half size (352x288)
record.avi
name of the created file.
Recording is finished by pressing Ctrl-C (on German Keyboards: Strg-
C). Sometimes using -f A instead of -f a might be necessary.
Other example:
> lavrec -f q -i n -d 1 -q 80 -s -l 80 -R l -U record.avi
Should start recording now,
-f q
use Quicktime as output format,
-i n
use Composite-In with NTSC format,
-d 1
record pictures with full size (640x480)
-q 80
set the quality to 80% of the captured image
-s use stereo mode (default mono)
-l 80
set the recording level to 80% of the max during recording
-R l
set the recording source to Line-In
-U With this option lavrec uses read instead of mmap for recording;
this is needed if your sound card does not support mmap for
recording.
Setting the mixer does not work with every sound card. If you record
with two different settings and both recordings are equally loud, you
should set up the mixer with a mixer program. After that you should
use the -l -1 option when you record with lavrec.
The size of the image depends on the card you use. At full size (-d
1) you get these image sizes: BUZ and LML33: 720x576, the DC10 and
DC30: 768x576
Other example:
> lavrec -w -f a -i S -d 2 -l -1 record%02d.avi
Should start recording,
-w Waits for user confirmation to start (press enter)
-f a
use AVI as output format,
-i S
use SECAM SVHS-Input (SECAM Composite recording is also
possible: -i s)
-d 2
the pictures are half size
-l -1
do not touch the mixer settings
record%02d.avi
Here lavrec creates the first file named record00.avi after the
file has reached a size of 1.6GB (after about 20 Minutes
recording) it starts a new sequence named record01.avi and so on
till the recording is stopped or the disk is full. With the
release of the 1.9.0 Version, the mjpegtools are able to handle
AVI files larger than 2GB. So that option is not needed any more
if you want to record more data that fits into a 2GB file.
Other example:
> lavrec -f a -i t -q 80 -d 2 -C europe-west:SE20 test.avi
Should start recording now,
-f a
use AVI as output format,
-i t
use tuner input,
-q 80
set the quality to 80% of the captured image
-d 2
the pictures are half size (352x288)
-C choose TV channels, and the corresponding -it and -iT (video
source: TV tuner) can currently be used on the Marvel G200/G400
and the Matrox Millennium G200/G400 with Rainbow Runner extension
(BTTV-Support is under construction). For more information on
how to make the TV tuner parts of these cards work, see the
Marvel/Linux project on: http://marvel.sourceforge.net
Last example:
> lavrec -f a -i p -g 352x288 -q 80 -s -l 70 -R l --software-encoding
test03.avi
The two new options are -g 352x288, which sets the size of the video
to be recorded, and --software-encoding, which enables software
encoding of the recorded images. With this option you can also record
from a bttv based card, but the processor load is high. The
--software-encoding option only works for generic video4linux cards
(such as the Brooktree-848/878 based cards); it doesn't work for
zoran-based cards.
3.2. Other recording hints
All lavtools accept a file description like file*.avi, so you do not
have to name each file individually, though that is also possible.
should be able to get started.
Here are some hints for sensible settings. Turn the quality to 80% or
more for -d 2 capture. At full resolution as low as 40% seems to be
visually "perfect". -d 2 is already better than VHS video (a *lot*!).
For a Marvel you should not set the quality higher than 50 when you
record at full size (-d 1). If you use higher settings (-q 60) it is
more likely that you will encounter problems. Higher settings will
result in framedrops. If you're aiming to create VCD's then there is
little to be gained recording at full resolution as you need to reduce
to -d 2 resolution later anyway.
You can record at other sizes than the obvious -d 1/2/4. You can use
combinations such as half horizontal size and full vertical size:
-d 21. For NTSC this would record at a size of 352x480. This helps if
you want to create SVCDs, because scaling the 352 pixels up to 480 is
less visible to the eye than using the other combination, -d 12,
which keeps the full horizontal resolution and half the vertical
resolution and would give a size of 720x288 for NTSC.
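A recording command using this decimation could look like the
following sketch; it simply combines -d 21 with options already shown
above, and you may need to adjust the input selection (-i) and the
quality to your card:
> lavrec -f a -i n -d 21 -q 80 record_for_svcd.avi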
3.3. Some information about the typical lavrec output while recording
0.06.14:22 int: 00040 lst:0 ins:0 del:0 ae:0 td1=0.014 td2=0.029
The first part shows the time lavrec has been recording. int: the
interval between two frames. lst: the number of lost frames. ins and
del: the number of frames inserted and deleted for sync correction.
ae: the number of audio errors. td1 and td2: the audio/video time
differences.
o (int) frame interval should be around 33 (NTSC) or 40 (PAL/SECAM).
If it is very different, you'll likely get a bad recording and/or
many lost frames
o (lst) lost frames are bad and mean that something is not working
very well during recording (too slow HD, too high CPU usage, ...)
Try recording with a greater decimation and possibly a lower
quality.
o (ins, del) a few inserted OR deleted frames are normal -> sync
correction. If you have many lost AND inserted frames, you're asking
too much of your machine. Use less demanding options or try a
different sound card.
o (ae) audio errors are never good. Should be 0
o (td1, td2) the time difference is always floating around 0, unless
sync correction is disabled (--synchronization!=2, 2 is the default).
3.4. Notes about "interlace field order - what can go wrong and how
to fix it"
Firstly, what does it mean for interlace field order to be wrong?
The whole mjpegtools image processing chain is frame-oriented. Since
it is video material that is captured, each frame comprises a top
field (the 0th, 2nd, 4th and so on lines) and a bottom field (the 1st,
3rd, 5th and so on lines).
3.4.1. There are three bad things that can happen with fields
1. This is really only an issue for movies in PAL video where each
film frame is sent as a pair of fields. These can be sent top or
bottom field first and sadly it's not always the same, though
bottom-first appears to be usual. If you capture with the wrong
field order (you start capturing each frame with a bottom rather
than a top or vice versa) the frames of the movie get split
*between* frames in the stream. Played back on a TV where each
field is displayed on its own this is harmless. The sequence of
fields played back is exactly the same as the sequence of fields
broadcast. Unfortunately, played back on a computer monitor where
both fields of a frame appear at once, it looks *terrible* because
each frame is effectively mixing two moments in time 1/25 sec
apart.
2. The two fields can simply be swapped somehow so that top gets
treated as bottom and bottom treated as top. Juddering and "slicing"
are the result. This occasionally seems to happen due to hardware
glitches in the capture card.
3. Somewhere in capturing/processing the *order* in time of the two
fields in each frame can get mislabeled somehow. This is not good
as it means that when playback eventually takes place a field
containing an image sampled earlier in time comes after an image
sampled later. Weird "juddering" effects are the results.
3.4.2. How can I recognize if I have one of these problems?
1. This can be hard to spot. If you have mysteriously flickery
pictures during playback try encoding a snippet with the reverse
field-order forced (see below). If things improve drastically you
know what the problem was and what the solution is!
2. The two fields can simply be swapped somehow so that top gets
treated as bottom and bottom treated as top. Juddering and "slicing"
are the result. This occasionally seems to happen due to hardware
glitches in the capture card. That problem looks like this:
Interlacing problem
3. Somewhere in capturing/processing the *order* in time of the two
fields in each frame can get mislabeled somehow. This is not good
as it means that when playback eventually takes place a field
containing an image sampled earlier in time comes after an image
sampled later. Weird "juddering" effects are the result.
If you use glav or lavplay, be sure that you also use the -F/--flicker
option. This disables some processing that would otherwise make the
picture look better and hide the problem.
If you want to look at the video you can also use yuvplay:
> lav2yuv | ... | yuvplay
If there is a field order problem you should see it with yuvplay.
3.4.3. How can you fix it?
1. To fix this one, the fields need to be "shifted" through the
frames. Use yuvcorrect's -T BOTT_FORWARD/TOP_FORWARD to shift the way
fields are allocated to frames. You can find out the current field
order for an MJPEG file by looking at the first few lines of debug
output from:
> lav2yuv -v 2 the_mjpeg_file > /dev/null
Or re-record, exchanging -f a for -f A or vice-versa.
2. This isn't too bad either. Use a tool that simply swaps the top and
bottom fields a second time; yuvcorrect can do this with the -T
LINE_SWITCH option.
3. This is easy to fix. Either tell a tool somewhere to relabel the
fields, or simply tell the player to play back in swapped order (the
latter can be done "indirectly" by telling mpeg2enc when encoding to
reverse the flag (-z b|t) that tells the decoder which field order
to use).
In order to determine exactly what type of interlacing problem you
have, you need to extract some frames from the recorded stream and
take a look at them:
> mkdir pnm
> lav2yuv -f 40 video.avi | y4mtoppm | pnmsplit - pnm/image%d.pnm
> rm pnm/image?.pnm
> cd pnm
> xv
First we create a directory where we store the images. The lav2yuv -f
40 writes only the first 40 frames to stdout. The mjpegtools program
y4mtoppm converts the frames to pnm images, and pnmsplit splits each
picture into its two fields, stored as two separate pictures. Then we
remove the first 10 images because pnmsplit does not support %0xd
numbering. Without a leading zero in the number, the files would be
sorted in the wrong order, which makes the sequence confusing to look
at.
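If you prefer to keep the single-digit images instead of deleting
them, a small shell loop like this (just a sketch) renames them with a
leading zero so they sort correctly:
> for i in 0 1 2 3 4 5 6 7 8 9; do mv pnm/image$i.pnm pnm/image0$i.pnm; done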
Use your favorite graphic program (xv for example) to view the
pictures. As each picture only contain one field out of two they will
appear scaled vertically. If you look at the pictures you should see
the movie slowly advancing.
If you have a film you should always see two pictures in a row that
are nearly the same (because each film frame is split into two fields
for broadcasting). You can observe this easily if you have comb
effects when you pause the film, because both fields will be
displayed at the same time. The two pictures that belong together
should have an even number and the following odd number. So if you
take a look at the pictures: 4 and 5 are nearly identical, 5 and 6
differ (have movement), 6 and 7 are identical, 7 and 8 differ, ....
To fix this problem you have to use yuvcorrect's -T BOTT_FORWARD or
TOP_FORWARD. You can also have the problem that the field order
(top/bottom) is still wrong. You may have to use yuvcorrect a second
time with -M LINE_SWITCH, or use the mpeg2enc -z (b|t) option.
To see if you guessed correctly, extract the frames again, reordering
them using yuvcorrect:
> lav2yuv -f 40 video.avi | yuvcorrect -T OPTION | y4mtoppm | pnmsplit
- pnm/image%d.pnm
Where "OPTION" is what you think it will corrects the problem. This
is for material converted from film. Material produced directly for TV
is addressed below.
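Once you have found the option that fixes your material, the same
yuvcorrect call can simply be placed in the encoding pipeline. A
minimal sketch, assuming BOTT_FORWARD turned out to be the right
correction and a plain MPEG-1 target:
> lav2yuv video.avi | yuvcorrect -T BOTT_FORWARD | mpeg2enc -o video_fixed.m1v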
3.4.4. Hey, what about NTSC movies ?
Movies are broadcast in NTSC using "3:2" pulldown, which means that
half of the captured frames contain fields from one movie frame and
the other half contain fields from two movie frames. To undo this
effect for efficient MPEG encoding you need to use yuvkineco.
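yuvkineco works as a filter in the usual YUV pipeline. A sketch of
such an inverse-telecine run might look like this (the exact mpeg2enc
options depend on your target format):
> lav2yuv ntsc_movie.avi | yuvkineco | mpeg2enc -o ntsc_movie.m1v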
If you have an interlaced source like a TV camera, each frame consists
of two fields that are recorded at different points in time and shown
one after the other. Spotting the problem here is harder. You need to
find something moving horizontally from left to right. When you
extract the fields, the object should move in small steps from the
left to the right, not one large step forward, a small step back, a
large step forward, a small step back... You have to use the same
options mentioned above to correct the problem.
Do not expect the field order to always be the same (top- or bottom-
first). It may change between channels, between films, and it may
even change within a film. If it changes constantly you may have to
encode with mpeg2enc -I 1 or even -I 2.
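As with the other mpeg2enc options, -I is simply added to the encoder
stage of the pipeline, for example (a sketch, here combined with -f 3
for a generic MPEG-2 stream, since interlaced field coding is an
MPEG-2 feature):
> lav2yuv video.avi | mpeg2enc -f 3 -I 1 -o video.m2v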
You can only have these problems if you record at full size!
4. Creating videos from other sources
Here are some hints and descriptions of how to create the videos from
other sources like images and other video types.
You might also be interested in taking a look at the Transcoding of
existing MPEG-2 section.
4.1. Creating videos from images
You can use jpeg2yuv to create a yuv stream from separate JPEG images.
This stream is sent to stdout, so that it can either be saved into a
file, encoded directly to a mpeg video using mpeg2enc or used for
anything else.
Saving a yuv stream can be done like this:
> jpeg2yuv -f 25 -I p -j image%05d.jpg > result.yuv
Creates the file result.yuv containing the yuv video data with 25 FPS.
The -f option is used to set the frame rate. Note that image%05d.jpg
means that the jpeg files are named image00000.jpg, image00001.jpg and
so on (05 means five digits, 04 means four digits, etc.). The -I p
option is needed for specifying the interlacing. You have to check
which type you have. If you don't have interlacing, just choose p for
progressive.
If you want to encode a mpeg video directly from jpeg images without
saving a separate video file type:
> jpeg2yuv -f 25 -I p -j image%05d.jpg | mpeg2enc -o mpegfile.m1v
Does the same as above but saves a mpeg video rather than a yuv video.
See mpeg2enc section for details on how to use mpeg2enc.
You can also use yuvscaler between jpeg2yuv and mpeg2enc. If you want
to create a SVCD from your source images:
> jpeg2yuv -f 25 -I p -j image%05d.jpg | yuvscaler -O SVCD | mpeg2enc
-f 4 -o video.m2v
You can use the -b option to set the number of the image to start
with. The number of images to be processed can be specified with the
-n number. For example, if your first image is image01.jpg rather than
image00.jpg, and you only want 60 images to be processed type:
>jpeg2yuv -b 1 -f 25 -I p -n 60 -j image*.jpg | yuv2lav -o
stream_without_sound.avi
Adding the sound to the stream then:
> lavaddwav stream_without_sound.avi sound.wav stream_with_sound.avi
For ppm input there is the ppmtoy4m util. There is a manpage for
ppmtoy4m that should be consulted for additional information.
So to create a mpeg video try this:
>cat *.ppm | ppmtoy4m -o 75 -n 60 -F 25:1 | mpeg2enc -o output.m1v
This cats each *.ppm file to ppmtoy4m. There the first 75 frames
(pictures) are skipped and the next 60 are encoded by mpeg2enc to
output.m1v. You can run it without the -o and -n options. The -F
option sets the frame rate; the default is NTSC (30000:1001), for PAL
you have to use -F 25:1.
Other picture formats can also be used if there is a converter to ppm.
>ls *.tga | xargs -n1 tgatoppm | ppmtoy4m | yuvplay
A list of filenames (ls *.tga) is given to xargs, which executes
tgatoppm with one (-n1) argument per call and feeds the output into
ppmtoy4m. This time the video is only shown on the screen. The xargs
is only needed if the converter (tgatoppm) can only operate on a
single image at a time.
If you want to use the ImageMagick 'convert' tool (a Swiss Army Knife)
try:
>convert *.gif ppm:- | ppmtoy4m | yuvplay
That means: take all '.gif' images in the directory, convert them to
PPM format and pipe them to stdout, then ppmtoy4m processes them ....
4.2. Decoding streams with mplayer
Decoding streams with mplayer is a nice way of bringing any video
that mplayer can play into something you can edit or encode directly
to an MPEG video with the mjpegtools. This method has been tested
with mplayer 1.0rc2, and with modifications of the mplayer command
line it should also work with newer and older versions.
>mkfifo stream.yuv
>cat stream.yuv | yuv2lav -o mjpeg_wo.avi &
>mplayer -nosound -noframedrop -vo yuv4mpeg anyfile.mpg
>mplayer -vo null -ao pcm:file=anyfile.wav anyfile.mpg
Now you have, for example, an MJPEG encoded AVI without sound; the
sound will be in anyfile.wav. You can now add the sound to the AVI
with lavaddwav, edit the file and encode it.
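Adding the extracted sound back to the AVI works just like in the
jpeg2yuv example above, for instance:
> lavaddwav mjpeg_wo.avi anyfile.wav mjpeg_with_sound.avi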
Instead of yuv2lav you can also use mpeg2enc or any other tool from
the mjpegtools, so your command might also look like this:
> cat stream.yuv | yuvdenoise | yuvscaler -O SVCD | mpeg2enc -f 4 -o
video_svcd.m2v
Then cat the wav file into mp2enc to encode it to MP2 audio. The -vo
yuv4mpeg option also works well with the other input types mentioned
in the mplayer documentation.
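Such an audio encoding step could look like this sketch, assuming
mp2enc's default bitrate is acceptable:
> cat anyfile.wav | mp2enc -o audio.mp2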
4.3. Decoding MPEG2 streams with mpeg2dec
You can decode MPEG-2 streams with the patched mpeg2dec version, which
creates yuv streams that you can pipe into any other mjpegtools
program. Or you can use a mpeg2dec version directly from the libmpeg2
project and use the pgmpipe output mode; with the pgmtoy4m program
from the mjpegtools you can convert that pgm output back to yuv.
If you ask yourself why there is a patched version and pgmtoy4m: the
answer is that the patch for yuv output was sent several times to the
libmpeg2 developers but was never included. Now we have two ways
around that problem. Decoding looks like this:
> mpeg2dec -s -o pgmpipe ANYTS.VOB | pgmtoy4m -i t -a 10:11 -r
30000:1001 | mpeg2enc -f 8 -o newvideo.m2v
You can decode the audio as described in the Transcoding of existing
MPEG-2 Section.
4.4. Other things to know
If you have Transport Streams from your DVB card or satellite
receiver, you might want to demultiplex or cut them. A nice tool for
that is Project X, available from:
http://www.lucike.info/page_projectx.htm
You can process the streams afterwards as you would any MPEG movie or
demultiplexed audio and video, so the Transcoding of existing MPEG-2
section of this document will be of interest.
5. Checking if recording was successful
You can use lavplay or glav. IMPORTANT: NEVER try to run xawtv and
lavplay or glav with hardware playback at the same time; it will not
work. For software playback it works fine.
>lavplay -p S record.avi
You should see the recorded video and hear the sound. But the decoding
of the video is done by the CPU which will place a heavy load on the
system. The advantage of this method is you don't need xawtv.
The better way:
>lavplay -p H record.avi
The video is decoded and played by the hardware. The system load is
now very low. This will play it back on-screen using the hardware
rather than software decoding.
You might also try:
> lavplay -p C record.avi
Which will play it back using the hardware but to the video output of
the card.
> glav record.avi
Does the same as lavplay but you have a nice GUI. The options for
glav and lavplay are nearly the same. If no option is given, SW
playback is used.
When using hardware playback, a signal for the Composite and SVHS OUT
is generated, so you can view the movie on your TV.
> lav2yuv test.eli | yuvplay
This is another way to view the video without sound. You can also
insert yuvplay into the encoding pipeline; that way you see the
changes made by filters and scaling. You can also use it for
slow-motion debugging.
NOTE: After loading the drivers you have to start xawtv to set up
some things lavplay and glav do not, but which are needed for HW
playback. Don't forget to close xawtv!
NOTE2: Do not try to send glav or lavplay into the background; it
won't work correctly!
NOTE3: SECAM playback is currently (12.3.2001) only in monochrome, but
the recording and encoding are done right.
NOTE4: Bad cables may reduce the quality of the image. Normally you
can't see this, but when there is text you might notice a small
shadow. When you see this you should change the cable.
Coming soon: there is a tool which makes recording videos very simple:
Linux Studio. You can download it at: http://ronald.bitfreak.net
6. Edit the video
6.1. Edit with glav
Most tasks can easily be done with glav, like deleting parts of the
video and cutting, pasting and copying parts of the videos.
glav button description
The modifications should be saved because glav does not destructively
edit the video. This means that the original video is left untouched
and the modifications are kept in an extra "Edit List" file readable
with a text editor. These files can be used as an input to the other
lavtools programs such as lav2wav, lav2yuv, lavtrans.
If you want to cut off the beginning and the end of the stream, mark
the beginning and the end and use the "save select" button. The edit
list file is then used as input for the lavtools. If you want to split
a recorded video into smaller parts, simply select the parts and save
each part to a different list file.
You can see all changes to the video and sound NOW and you do not need
to recalculate anything.
If you want to get a "destructive" version of your edited video use:
> lavtrans -o short_version.avi -f a editlist.eli
-o specifies the output name
-f a
specifies the output format (AVI for example)
editlist.eli
is the list file where the modifications are described. You
generate the list file with the "save all" or "save select"
buttons in glav.
6.2. Unify videos
> lavtrans -o stream.qt -f q record_1.avi record_2.avi ...
record_n.avi
-o specifies the outputfile name
-f q
specifies the output format, quicktime in this case
This is usually not needed. Keep in mind that there is a 2GB
file-size limit on 32-bit systems with an older glibc, but that is
usually not a problem these days.
6.3. Separate sound
> lavtrans -o sound.wav -f w stream.avi
Creates a wav file with the sound of stream.avi. This may be needed if
you want to remove noise or if you want to convert the sound to
another format.
Another way to split the sound is:
> lav2wav editlist.eli >sound.wav
6.4. Separate images
>mkdir jpg; lavtrans -o jpg/image%05d.jpg -f i stream.avi
First create the directory "jpg". Then lavtrans will create single JPG
images in the jpg directory from the stream.avi file. The files will
be named: image00000.jpg, image00001.jpg ....
The jpg images created contain the whole picture. But if you have
recorded at full size the images are stored interlaced. Usually the
picture viewers show only the first field in the jpg file.
If you want to have the whole frame in a single file you can use this
version:
> lav2yuv -f 1 stream.avi | y4mtoppm -L >file.pnm
If you want to split the fields into separate files, use this:
> lav2yuv -f 5 ../stream.avi | y4mtoppm | pnmsplit - image%d.pnm
Maybe interesting if you need sample images and do not want to play
around with grabbing a single image.
6.5. Creating movie transitions
Thanks to Philipp Zabel's lavpipe, we can now make simple transitions
between movies or combine multiple layers of movies.
Philipp wrote this HOWTO on how to make transitions:
Let's assume this simple scene: We have two input videos intro.avi and
epilogue.mov and want to make intro.avi transition into epilogue.mov
with a duration of one second (that is 25 frames for PAL or 30 frames
for NTSC).
Intro.avi and epilogue.mov have to be of the same format (the same
frame rate and resolution). In this example they are both 352x288 PAL
files. intro.avi contains 250 frames and epilogue.mov is 1000 frames
long.
Therefore our output file will contain:
the first 225 frames of intro.avi
a 25 frame transition containing the last 25 frames of intro.avi
and the first 25 frames of epilogue.mov
the last 975 frames of epilogue.mov
We could get the last 25 frames of intro.avi by calling:
>lav2yuv -o 225 -f 25 intro.avi
-o 225, the offset, tells lav2yuv to begin with frame # 225 and
-f 25 makes it output 25 frames from there on.
Another possibility is:
> lav2yuv -o -25 intro.avi
Since negative offsets are counted from the end.
And the first 25 frames of epilogue.mov:
> lav2yuv -f 25 epilogue.mov
-o defaults to an offset of zero
But we need to combine the two streams with lavpipe. So the call would
be:
> lavpipe "lav2yuv -o 255 -f 25 intro.avi" "lav2yuv -f 25
epilogue.mov"
The output of this is a raw yuv stream that can be fed into
transist.flt.
transist.flt needs to be informed about the duration of the transition
and the opacity of the second stream at the beginning and at the end
of the transition:
-o num
opacity of second input at the beginning [0-255]
-O num
opacity of second input at the end [0-255]