#vseeface
holosynth · 6 months
Text
give it up for vtuber model raffle winner #1, @therenobee! an incredibly lucky roll as a long-time enjoyer of their work. based on @voidtaffy's take on the design - an absolute treat to model!!!
song: xenia - graham kartna
youtube version: https://www.youtube.com/watch?v=DFBBwXAWQ4I
still versions under the cut:
[two still images]
112 notes
foxern · 25 days
Text
Ever get an idea that sticks in your head and won't leave? Yeah.
A couple months of work. Hundreds of new things learned in Blender and Unity. A lot of headaches. All led to this.
If anyone knows how to make the jaw bone open more, I would love the help since nothing I've tried works.
23 notes
ayanathedork · 4 months
Text
[image]
this was just for funsies to learn unity (which pretty much failed, i still can't use it) but it's cool to know i can make stuff for vsf :3
47 notes
ompuco · 10 months
Text
Hypnospace Outlaw Vtuber effect I made a couple weeks ago, since a good friend & fellow Vtuber was streaming it for the first time at the time!
74 notes
bird-chii · 10 months
Text
[image]
3D commissions are open, for anyone wondering. Send your reference sheet through the link above and I'll email you back a quote based on your reference.
my art tumblr is @birdchiiart
36 notes
toa-kohutti · 1 year
Text
LOOK WHAT I LEARNED HOW TO DO (commissions are now open for any interested future Bionicle VTubers, starting at $75! Message me or drop an ask in my inbox for more info!)
62 notes
brandrive-art · 1 year
Text
[two images]
Evangelyne test in VSeeFace using two different materials.
134 notes
kianga-eu · 2 months
Text
VRChat & VSeeFace avatar commission for Huxley, featuring his surfer collie Laelia with two different outfits. I loved working with this design, hope I could do her justice! Thank you so much for the commission! 💚
8 notes
slayerkid · 1 month
Text
You can use your phone camera on VSeeFace!
For anyone interested in VTubing who can't afford a webcam at the moment, or anyone who'd like to use their iPhone for improved face tracking on their 3D model, I made a simple tutorial for you! This method requires the app version of VTube Studio. The tutorial is short and simple to follow, and everything in this video is FREE to use - as long as you own a phone. Click here to check out the video!
9 notes
canmom · 4 months
Text
「viRtua canm0m」 Project :: 002 - driving a vtuber
That about wraps up my series on the technical details on uploading my brain. Get a good clean scan and you won't need to do much work. As for the rest, well, you know, everyone's been talking about uploads since the MMAcevedo experiment, but honestly so much is still a black box right now it's hard to say anything definitive. Nobody wants to hear more upload qualia discourse, do they?
On the other hand, vtubing is a lot easier to get to grips with! And more importantly, actually real. So let's talk details!
Vtubing is, at the most abstract level, a kind of puppetry using video tracking software and livestreaming. Alternatively, you could compare it to realtime mocap animation. Someone at Polygon did a surprisingly decent overview of the scene if you're unfamiliar.
Generally speaking: you need a model, and you need tracking of some sort, and a program that takes the tracking data and applies it to a skeleton to render a skinned mesh in real time.
Remarkably, there are a lot of quite high-quality vtubing tools available as open source. And I'm lucky enough to know a vtuber who is very generous in pointing me in the right direction (shoutout to Yuri Heart, she's about to embark on something very special for her end of year streams so I highly encourage you to tune in tonight!).
For anime-style vtubing, there are two main types, termed '2D' and '3D'. 2D vtubing involves taking a static illustration and cutting it up into pieces which can be animated through warping and replacement - the results can look pretty '3D', but they're not using 3D graphics techniques; it's closer to the kind of cutout animation used in gacha games. The main tool used is Live2D, which is proprietary with a limited free version. Other alternatives with free/paid models include PrPrLive and VTube Studio. FaceRig (no longer available) and Animaze (proprietary) also support Live2D models. I have a very cute 2D vtuber avatar created by @xrafstar for use in PrPrLive, and I definitely want to include some aspects of her design in the new 3D character I'm working on.
[image]
For 3D anime-style vtubing, the most commonly used software is probably VSeeFace, which is built on Unity and renders the VRM format. VRM is an open standard that extends the GLTF file format for 3D models, adding support for a cel shading material and defining a specific skeleton format.
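(If you're curious what that looks like on disk: a .vrm file is just a binary glTF container, so you can pull the JSON chunk out with a few lines of Python. A quick sketch - the filename is made up, and note that VRM 0.x models expose a 'VRM' key under extensions while VRM 1.0 uses 'VRMC_vrm':)

```python
import json
import struct

def read_gltf_json(path):
    """Pull the JSON chunk out of a GLB container (.vrm files are
    binary glTF): a 12-byte header, then the JSON chunk comes first."""
    with open(path, "rb") as f:
        magic, _version, _length = struct.unpack("<III", f.read(12))
        assert magic == 0x46546C67, "not a glTF container"      # b'glTF'
        chunk_length, chunk_type = struct.unpack("<II", f.read(8))
        assert chunk_type == 0x4E4F534A, "expected JSON chunk"  # b'JSON'
        return json.loads(f.read(chunk_length))

gltf = read_gltf_json("model.vrm")       # hypothetical filename
print(list(gltf.get("extensions", {})))  # VRM 0.x models list 'VRM' here
```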
It's incredibly easy to get a pretty decent looking VRM model using VRoid Studio, essentially a videogame-style character creator (it appears to be owned by Pixiv) whose anime-styled models can be customised using lots of sliders, hair pieces, etc. The program includes basic texture-painting tools, and the facility to load in new models, but ultimately the way to go for a more custom model is to use the VRM import/export plugin in Blender.
But first, let's have a look at the software which will display our model.
[image]
meet viRtua canm0m v0.0.5, a very basic design. her clothes don't match very well at all.
VSeeFace offers a decent set of parameters and honestly got quite nice tracking out of the box. You can also receive ARKit face tracking data from a connected iPhone, get hand tracking data from a Leap Motion, or disable its internal tracking and pipe in data from another application using the VMC protocol.
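(The VMC protocol, for what it's worth, is just OSC messages over UDP, so you can drive a model from a script. A minimal sketch using the python-osc package - the port number and the blendshape name here are assumptions, so check what your receiving app actually expects:)

```python
# pip install python-osc
from pythonosc.udp_client import SimpleUDPClient

# VMC is plain OSC over UDP; 39539 is the conventional receiver
# port, but check what your app is actually listening on.
client = SimpleUDPClient("127.0.0.1", 39539)

# set a blendshape by name, then tell the receiver to apply the frame
client.send_message("/VMC/Ext/Blend/Val", ["Joy", 0.8])
client.send_message("/VMC/Ext/Blend/Apply", [])
```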
If you want more control, another Unity-based program called VNyan offers more fine-grained adjustment, as well as a kind of node-graph based programming system for doing things like spawning physics objects or modifying the model when triggered by Twitch etc. They've also implemented experimental hand tracking for webcams, although it doesn't work very well so far. This pointing shot took forever to get:
[image]
<kayfabe>Obviously I'll be hooking it up to use the output of the simulated brain upload rather than a webcam.</kayfabe>
To get good hand tracking you basically need some kit - most likely a Leap Motion (1 or 2), which costs about £120 new. It's essentially a small pair of IR cameras designed to measure depth, which can be placed on a necklace, on your desk or on your monitor. I assume from there they use some kind of neural network to estimate your hand positions. I got to have a go on one of these recently and the tracking was generally very clean - better than what the Quest 2/3 can do. So I'm planning to get one of those, more on that when I have one.
Essentially, the tracker feeds a bunch of floating point numbers into the display software at every tick, and the display software is responsible for blending all these different influences and applying them to the skinned mesh. For example, a parameter might be something like eyeLookInLeft. VNyan uses the Apple ARKit parameters internally, and you can see the full list of ARKit blendshapes here.
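(As a toy illustration of that blending step - this is made up for illustration, not any particular app's internals:)

```python
def blend_parameter(sources):
    """Blend several weighted inputs for one tracking parameter
    (e.g. eyeLookInLeft) into a single value in the 0..1 range
    blendshapes expect. `sources` is a list of (value, weight)."""
    total = sum(w for _, w in sources)
    if total == 0.0:
        return 0.0
    value = sum(v * w for v, w in sources) / total
    return min(max(value, 0.0), 1.0)

# the webcam tracker says 0.7; an override pushes towards 1.0 at half strength
print(blend_parameter([(0.7, 1.0), (1.0, 0.5)]))  # 0.8
```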
To apply tracking data, the software needs a model whose rig it can understand. This is defined in the VRM spec, which tells you exactly which bones must be present in the rig and how they should be oriented in a T-pose. The skeleton is generally speaking pretty simple: you have shoulder bones but no roll bones in the arm; individual finger joint bones; 2-3 chest bones; no separate toes; 5 head bones (including neck). Except for the hands, it's on the low end of game rig complexity.
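(Building on the GLB reader sketch above, you could sanity-check a model's rig against the required bone list - again assuming the VRM 0.x JSON layout, and using only an illustrative subset of the bones the spec requires:)

```python
CORE_BONES = {  # an illustrative subset of the required set
    "hips", "spine", "head",
    "leftUpperArm", "leftLowerArm", "leftHand",
    "rightUpperArm", "rightLowerArm", "rightHand",
    "leftUpperLeg", "leftLowerLeg", "leftFoot",
    "rightUpperLeg", "rightLowerLeg", "rightFoot",
}

gltf = read_gltf_json("model.vrm")  # hypothetical filename again
human_bones = gltf["extensions"]["VRM"]["humanoid"]["humanBones"]
present = {entry["bone"] for entry in human_bones}
print("missing bones:", CORE_BONES - present or "none")
```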
Expressions are handled using GLTF morph targets, also known as blend shapes or (in Blender) shape keys. Each one is essentially a set of displacement values for the mesh vertices. The spec defines five default expressions (happy, angry, sad, relaxed, surprised), five vowel mouth shapes for lip sync, blinks, and shapes for pointing the eyes in different directions (if you wanna do it this way rather than with bones). You can also define custom expressions.
[image]
This viRtua canm0m's teeth are clipping through her jaw...
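(The maths for morph targets is about as simple as mesh deformation gets: each frame you scale every target's per-vertex displacements by its weight and sum them onto the rest positions. A toy numpy sketch:)

```python
import numpy as np

def apply_morph_targets(base, targets, weights):
    """Deform a mesh by its morph targets: every target is an
    (N, 3) array of per-vertex displacements, scaled by its
    weight and summed onto the (N, 3) rest positions."""
    out = base.copy()
    for name, delta in targets.items():
        w = weights.get(name, 0.0)
        if w:
            out += w * delta
    return out

# a two-vertex "mesh" with a single expression shape
base = np.zeros((2, 3))
targets = {"happy": np.array([[0.0, 0.1, 0.0], [0.0, -0.1, 0.0]])}
print(apply_morph_targets(base, targets, {"happy": 0.5}))
```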
By default, the face-tracking generally tries to estimate whether you qualify as meeting one of these expressions. For example, if I open my mouth wide it triggers the 'surprised' expression where the character opens her mouth super wide and her pupils get tiny.
You can calibrate the expressions that trigger this effect in VSeeFace by pulling funny faces at the computer to demonstrate each expression (it's kinda black-box); in VNyan, you can set it to trigger the expressions based on certain combinations of ARKit inputs.
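(Conceptually that trigger logic boils down to thresholding combinations of ARKit values - something like the sketch below, where every threshold number is invented for illustration rather than taken from either app:)

```python
def pick_expression(p):
    """Map raw ARKit blendshape values (0..1) to one of the five
    VRM preset expressions. Every threshold here is invented for
    illustration; real apps let you tune this per face."""
    if p.get("jawOpen", 0) > 0.7 and p.get("eyeWideLeft", 0) > 0.5:
        return "surprised"
    if p.get("browDownLeft", 0) > 0.6 and p.get("browDownRight", 0) > 0.6:
        return "angry"
    if p.get("mouthSmileLeft", 0) > 0.5 and p.get("mouthSmileRight", 0) > 0.5:
        return "happy"
    if p.get("mouthFrownLeft", 0) > 0.5 and p.get("mouthFrownRight", 0) > 0.5:
        return "sad"
    return "relaxed"

print(pick_expression({"jawOpen": 0.9, "eyeWideLeft": 0.8}))  # surprised
```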
For more complex expressions in VNyan, you need to sculpt a shape for each of the various ARKit blendshapes. These are not generated by default in VRoid Studio, so that will be a bit of work.
You can apply various kinds of post-processing to the tracking data, e.g. adjusting blending weights based on input values or applying moving-average smoothing (though this noticeably increases the lag between your movements and the model), restricting the model's range of movement in various ways, applying IK to plant the feet, and similar.
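(Moving-average smoothing, for instance, is just a windowed mean over the last few frames - which is also exactly why it adds lag. A toy sketch:)

```python
from collections import deque

class MovingAverage:
    """Windowed mean over the last few frames of one parameter.
    Bigger window = smoother but laggier: the output trails the
    input by roughly half the window length."""
    def __init__(self, window=5):
        self.samples = deque(maxlen=window)

    def update(self, value):
        self.samples.append(value)
        return sum(self.samples) / len(self.samples)

smooth = MovingAverage(window=5)
for raw in [0.0, 1.0, 1.0, 1.0, 1.0]:
    print(round(smooth.update(raw), 2))  # 0.0, 0.5, 0.67, 0.75, 0.8
```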
On top of the skeleton bones, you can add any number of 'spring bones' which are given a physics simulation. These are used to, for example, have hair swing naturally when you move, or, yes, make your boobs jiggle. Spring bones give you a natural overshoot and settle, and they're going to be quite important to creating a model that feels alive, I think.
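(That overshoot-and-settle behaviour falls out of plain damped spring dynamics. Here's a 1D toy version - constants picked arbitrarily to show the effect; real spring bones integrate per-bone in 3D, but the idea is the same:)

```python
def simulate_spring_bone(target=1.0, steps=40, dt=1 / 60,
                         stiffness=200.0, damping=10.0):
    """1D toy spring bone: the bone chases `target` under a damped
    spring force (semi-implicit Euler), so a sudden movement gives
    the characteristic overshoot past the target, then settling."""
    pos, vel = 0.0, 0.0
    for i in range(steps):
        accel = stiffness * (target - pos) - damping * vel
        vel += accel * dt
        pos += vel * dt
        if i % 5 == 4:
            print(f"frame {i + 1}: {pos:.3f}")

simulate_spring_bone()  # watch it shoot past 1.0 and swing back
```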
Next up we are gonna crack open the VRoid Studio model in Blender and look into its topology, weight painting, and shaders. GLTF defines a standard PBR metallic-roughness (plus normal maps) shading model in its spec, but leaves the actual shader implementation up to the application. VRM adds a custom toon shader, which blends between two colour maps based on the Lambertian shading, and this is going to be quite interesting to take apart.
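(As a preview of the teardown, the core of that two-colour blend looks something like this - a heavy simplification of MToon that ignores its shading shift/toony parameters and everything else the real shader does:)

```python
import numpy as np

def toon_shade(lit_color, shade_color, normal, light_dir, softness=0.1):
    """Blend between a shade map and a lit map using the Lambertian
    term N·L pushed through a smoothstep, giving the hard two-tone
    boundary instead of a smooth falloff. `softness` widens the
    boundary (standing in for MToon's shading shift/toony knobs)."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    lambert = max(float(np.dot(n, l)), 0.0)
    t = np.clip((lambert - (0.5 - softness)) / (2 * softness), 0.0, 1.0)
    t = t * t * (3 - 2 * t)  # smoothstep
    return (1 - t) * np.asarray(shade_color) + t * np.asarray(lit_color)

# a surface facing the light ends up fully on the lit colour map
print(toon_shade([1.0, 0.8, 0.8], [0.5, 0.3, 0.4],
                 np.array([0.0, 0.0, 1.0]), np.array([0.3, 0.3, 1.0])))
```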
[image]
The MToon shader is pretty solid, but ultimately I think I want to create custom shaders for my character. Shaders are something I specialise in at work, and I think it would be a great way to give her more of a unique identity. This will mean going beyond the VRM format, and I'll be looking into using the VNyan SDK to build on top of that.
More soon, watch this space!
9 notes
holosynth · 6 months
Text
here's vtuber model raffle winner #2, @buffbears!! i really enjoyed this character design and i'm so glad i got the chance to bring it into 3d :-) the foldable wings are definitely one of my favorite toggles i've done so far ahah
more info + stills under the cut!
[image]
youtube version: https://www.youtube.com/watch?v=DJJFu1fZKEc
song: pakederm punch - floor baba
15 notes
ompuco · 1 year
Text
the 60kbps gamer
(working on effects for viewer redeems, all the compression here is faked/emulated & isolated to just me)
69 notes
felineentity · 1 year
Video
A Smol of Marsh! Another Cat! This one has a bunch of fun toggles, and to this day it's one of my favourite Smols I've made.
27 notes
karlamon · 2 months
Text
When you bring a Taidum into VSeeFace:
Song: "Windows XP Tour 3" by Bill Brown
5 notes
fluffyphocks · 3 months
Text
Free to use VTuber Model
Hi there! I've been obsessed with making VTuber models lately and decided to make one for general public use!
This is a free-to-use (donations are appreciated, but not required) 3D model of a dragon girl. The model is of the bust only and is designed to be used only in the program VSeeFace. It comes with a few basic emotes which can be set up in the program (check YouTube for some nice guides).
The model is free to use for streaming and general fun; I only ask that it isn't used for commercial stuff and that you give me credit if anyone asks where you found it!
2 notes