
Re: [tor-talk] So I Guess Tor Could Never Be Secure No Matter What



In my last message I alluded to what led me to apply my thinking to the internet provider's hardware, but I included some extra material instead of revising what I had written.

I have decided to clarify the material that I did submit, though it is not necessarily in line with the portion about the hardware.

--------------





To attempt to clarify:

What causes something to look real?

form+color+more?



I took a DDS file, sampled colors from images of real-life objects, and applied them to the image in the DDS file, replacing similarly toned pixels in the CG image with colors from the real-life image.
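To make the idea concrete, here is a rough sketch in Python of that kind of tone-matched substitution (not my exact manual process). It assumes NumPy and Pillow are available and that the DDS texture has already been exported to PNG, since Pillow's DDS support is limited; the file names are placeholders.

    # Rough sketch: recolor a CG texture by brightness, using the average
    # real-photo color at each brightness level. File names are placeholders.
    import numpy as np
    from PIL import Image

    cg = np.asarray(Image.open("cg_texture.png").convert("RGB"), dtype=np.float32)
    real = np.asarray(Image.open("real_photo.png").convert("RGB"), dtype=np.float32)

    def luminance(img):
        # Rec. 601 luma weights, truncated to an integer 0-255 index.
        return (0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]).astype(np.uint8)

    # Average real-life color for every brightness level seen in the photo.
    real_lum = luminance(real).ravel()
    real_px = real.reshape(-1, 3)
    table = np.zeros((256, 3), dtype=np.float32)
    for level in range(256):
        mask = real_lum == level
        table[level] = real_px[mask].mean(axis=0) if mask.any() else (level,) * 3

    # Replace each CG pixel with the real-photo color of similar tone.
    out = table[luminance(cg)]
    Image.fromarray(out.astype(np.uint8)).save("cg_texture_recolored.png")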



It took about 20 minutes to do the areas I worked on, after which those areas looked more lifelike. I didn't change the model or the resolution of the image. However, I did increase the resolution later on, which resulted in a cleaner-looking character at both higher and lower game quality settings. The change in resolution seemed to have no noticeable effect on the performance of my system.



What I concluded from this is that there should be a means of making imagery (characters, scenes, and so on) that is fully representative of life.



One example I noticed, as others may have as well, is CG eyes. They don't look real in CG characters. I was able to create satisfactorily lifelike eyes by pulling colors directly from the eyes in a high-resolution picture of an actress.

When I extracted the colors at the pixel level, I noticed that the whites of the eye were not white but a shade of the color of the iris.
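As a small illustration of that pixel-level sampling, here is a sketch that averages two regions of a reference photo; the crop coordinates and file name are made up and would be picked by inspecting your own image.

    import numpy as np
    from PIL import Image

    photo = np.asarray(Image.open("eye_reference.png").convert("RGB"), dtype=np.float32)

    # Hypothetical pixel regions for the sclera ("white") and the iris;
    # in practice choose these by eye from the photo you are sampling.
    sclera = photo[100:120, 40:80].reshape(-1, 3)
    iris = photo[100:130, 90:130].reshape(-1, 3)

    print("average sclera color:", sclera.mean(axis=0))
    print("average iris color:  ", iris.mean(axis=0))
    # In the photo I sampled, the sclera came out tinted toward the iris
    # color rather than pure white (255, 255, 255).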





So what I'm stating is that if the colors of real life were sampled and accurately applied to a character, object, or other subject, the resulting image (3D or 2D) would reproduce the coloration we recognize as life. From examining the color palette of a photographed portrait, I believe the color variation and base coloration typically applied to CG do not occur in life.
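One well-known way to push a CG image toward a photo's palette, which I believe is in the spirit of this, is to match each channel's mean and spread to the photo's (a simplified take on Reinhard-style color transfer). File names below are placeholders.

    import numpy as np
    from PIL import Image

    def match_color_stats(cg_path, real_path, out_path):
        """Shift the CG image's per-channel mean/std toward the real photo's."""
        cg = np.asarray(Image.open(cg_path).convert("RGB"), dtype=np.float32)
        real = np.asarray(Image.open(real_path).convert("RGB"), dtype=np.float32)
        cg_mean, cg_std = cg.mean(axis=(0, 1)), cg.std(axis=(0, 1)) + 1e-6
        real_mean, real_std = real.mean(axis=(0, 1)), real.std(axis=(0, 1))
        out = (cg - cg_mean) / cg_std * real_std + real_mean
        Image.fromarray(np.clip(out, 0, 255).astype(np.uint8)).save(out_path)

    # match_color_stats("cg_render.png", "real_portrait.png", "cg_life_palette.png")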





A low-res picture of a person at 512x512 pixels still looks like a person because of the colors used in each pixel. If you raise the resolution to 1024x1024, the person might look clearer, but the person still takes up the same amount of the picture.

In a 5x5 in print at 512x512 and a 5x5 in print at 1080x1080, the person in both pictures is 3x2 in. In both you will most likely be able to tell easily that it is a person, and that it is the same person in both pictures.
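Working those numbers through (both prints taken as 5x5 in, the person as 3x2 in of each):

    # How many pixels describe a 3x2 in person in a 5x5 in print
    # at the two resolutions mentioned above.
    for side_px in (512, 1080):
        ppi = side_px / 5.0                 # pixels per inch of the print
        person_w, person_h = 3 * ppi, 2 * ppi
        print(f"{side_px}x{side_px}: {ppi:.0f} ppi, person about {person_w:.0f}x{person_h:.0f} px")
    # 512x512:   ~102 ppi, person about 307x205 px
    # 1080x1080:  216 ppi, person about 648x432 px
    # Either way the person covers the same 60% x 40% of the frame; only the
    # number of pixels describing them changes.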





Pulling colors and textures from the environment, the way Autodesk's tools do (which may or may not do it well), could be a way to get the look it seems games keep striving for. (I took a picture of real steps and replaced the in-game steps texture with it, and the steps that went in looked real.)





This got me wondering: everything we see on the screen is a set number of pixels, 1080p or whatever your screen offers. A nicely animated character in a movie such as Teenage Mutant Ninja Turtles is really only ever shown with the number of pixels your screen can support at most, regardless of the resolution the animation studio created the model in. That could imply that I could pull frames from a movie and create for myself a model of that character.

After that I found Autodesk software which may be able to perform that task.
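For the frame-grabbing half of that idea, here is a minimal sketch using OpenCV (assuming opencv-python is installed); the saved frames could then be handed to photogrammetry software, such as Autodesk's, to attempt a model. The paths are placeholders.

    import cv2  # pip install opencv-python

    def extract_frames(video_path, out_prefix, every_n=30):
        """Save every Nth frame of a video as a PNG for later photogrammetry."""
        cap = cv2.VideoCapture(video_path)
        index = saved = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if index % every_n == 0:
                cv2.imwrite(f"{out_prefix}_{saved:05d}.png", frame)
                saved += 1
            index += 1
        cap.release()
        return saved

    # extract_frames("movie_clip.mp4", "frames/turtle", every_n=15)  # placeholder paths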







-

Having found all of this, my question for those in the field who deal with this all the time is: why haven't they been able to discover this and apply it?

It would seem that when a magazine applies CG changes to a model, they can do it so that real life is added to with real-looking CG.



Furthermore, with whatever texture level a given system can handle, if the colors were altered to a real-life palette, would that not make the CG object look real, and possibly better as a whole?



As one example, I was working with an image of Lara Croft from the new game. If the color palette were altered to use only colors found in real life, how much more real would the character instantly look? That is, setting aside that reshaping the character's figure would be needed as well, which might be accomplished by applying the pixel distribution of a living person to the model, or combinations thereof.







Another random thought: 3D can currently be produced as 3D inside of 3D on a screen. A door you can look through in a game looks as though you are looking through a door in real life, relative to the view of the character.



All of this is represented solely in pixels placed in two-dimensional rows. Shouldn't 3D viewing therefore be producible by means that let the person looking at the screen view 3D in the same manner?

If rows of pixels can present 3D within the scene, why isn't 3D coming out of, or going into, the screen being done? It seems doable in another environment. It should be possible to look at our screen and see that we are looking down a tunnel, or that the entirety of the game is outside of the screen. One example would be a portal within a portal in the game Portal.

 

To clarify further: if life-accurate images are the goal, that has already been accomplished by any photograph displayed digitally.

It could follow that, if that accuracy is the goal, the only source needed (though others are possible) could be a picture of real life, plus multiple others if need be, from which to extract the symmetry of life from a source of pixels. Those would provide the exact shape, form, and more of the object one wants to produce.



As for the way zooming in or out of an image seemed to affect its quality, I submitted that for those who have a better grasp, or means that I don't, to test whether it actually allows higher quality to be produced in a smaller format than by other means, and whether it might be a faster way to shrink an image than the current means of processing it down to a smaller size.
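For anyone wanting to test that, a simple starting point is to time a few of Pillow's resampling filters when shrinking the same image and then judge the results by eye (or by a metric of your choosing); the source file name is a placeholder.

    import time
    from PIL import Image

    src = Image.open("source.png").convert("RGB")
    target = (src.width // 4, src.height // 4)

    # Compare a fast, low-quality filter against slower, higher-quality ones.
    for name, flt in [("NEAREST", Image.NEAREST),
                      ("BILINEAR", Image.BILINEAR),
                      ("LANCZOS", Image.LANCZOS)]:
        start = time.perf_counter()
        small = src.resize(target, flt)
        elapsed = (time.perf_counter() - start) * 1000
        small.save(f"shrunk_{name.lower()}.png")
        print(f"{name}: {elapsed:.1f} ms")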





Lastly: the portion about the hardware was a side note, included instead of making a second post comparing the performance of multiple mobile GPUs to a single desktop GPU.



The figures seem to show that two mobile GPUs on one desktop card might use less energy, produce less heat, and provide more computational power than a single desktop GPU.

That then made me wonder: if two desktop GPUs can fit onto one desktop card, what would it be like if four mobile GPUs were put onto one card?
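One way to frame that comparison is performance per watt. The figures below are placeholders only, not measurements, and would need to be replaced with real spec-sheet or benchmark numbers.

    def gflops_per_watt(gflops, watts):
        """Simple efficiency figure for comparing GPU configurations."""
        return gflops / watts

    # Placeholder numbers -- NOT measurements; substitute real figures.
    desktop_gflops, desktop_watts = 4000, 250
    mobile_gflops, mobile_watts = 1500, 75

    for count in (1, 2, 4):
        g, w = mobile_gflops * count, mobile_watts * count
        print(f"{count} mobile GPU(s): {g} GFLOPS at {w} W "
              f"({gflops_per_watt(g, w):.1f} GFLOPS/W)")
    print(f"1 desktop GPU: {desktop_gflops} GFLOPS at {desktop_watts} W "
          f"({gflops_per_watt(desktop_gflops, desktop_watts):.1f} GFLOPS/W)")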

-- 
tor-talk mailing list - tor-talk@xxxxxxxxxxxxxxxxxxxx
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk