loulou ange
11-01-2013, 07:48 PM GMT
Peace be upon you.
How to create spiders (crawlers) that collect URLs, the way Google does.
I searched for this for a long time and, thankfully, found it written in C#.
First, download the library:
http://www.chilkatsoft.com/download/ChilkatDotNet2.msi
And here is the required code:
Code:
Chilkat.Spider spider = new Chilkat.Spider();
Chilkat.StringArray seenDomains = new Chilkat.StringArray();
Chilkat.StringArray seedUrls = new Chilkat.StringArray();

seenDomains.Unique = true;
seedUrls.Unique = true;

seedUrls.Append("http://directory.google.com/Top/Recreation/Outdoors/Hiking/Backpacking/");

// Set our outbound URL exclude patterns
spider.AddAvoidOutboundLinkPattern("*?id=*");
spider.AddAvoidOutboundLinkPattern("*.mypages.*");
spider.AddAvoidOutboundLinkPattern("*.personal.*");
spider.AddAvoidOutboundLinkPattern("*.comcast.*");
spider.AddAvoidOutboundLinkPattern("*.aol.*");
spider.AddAvoidOutboundLinkPattern("*~*");

// Use a cache so we don't have to re-fetch URLs previously fetched.
spider.CacheDir = "c:/spiderCache/";
spider.FetchFromCache = true;
spider.UpdateCache = true;

while (seedUrls.Count > 0)
{
    string url;
    url = seedUrls.Pop();
    spider.Initialize(url);

    // Spider 5 URLs of this domain,
    // but first, save the base domain in seenDomains.
    string domain;
    domain = spider.GetDomain(url);
    seenDomains.Append(spider.GetBaseDomain(domain));

    int i;
    bool success;
    for (i = 0; i <= 4; i++)
    {
        success = spider.CrawlNext();
        if (success != true)
        {
            break;
        }

        // Display the URL we just crawled.
        textBox1.Text += spider.LastUrl + "\r\n";

        // If the last URL was retrieved from cache,
        // we won't wait. Otherwise we'll wait 1 second
        // before fetching the next URL.
        if (spider.LastFromCache != true)
        {
            spider.SleepMs(1000);
        }
    }

    // Add the outbound links to seedUrls, except
    // for the domains we've already seen.
    for (i = 0; i <= spider.NumOutboundLinks - 1; i++)
    {
        url = spider.GetOutboundLink(i);
        domain = spider.GetDomain(url);
        string baseDomain;
        baseDomain = spider.GetBaseDomain(domain);
        if (!seenDomains.Contains(baseDomain))
        {
            seedUrls.Append(url);
        }

        // Don't let our list of seedUrls grow too large.
        if (seedUrls.Count > 1000)
        {
            break;
        }
    }
}
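For readers who want to understand the crawl strategy itself (pop a seed URL, record its base domain, then queue outbound links only for base domains not seen before, with a cap on the queue size), here is a minimal sketch of the same idea in plain Python. It deliberately stubs out the actual page fetching behind a caller-supplied function, and all names (`crawl`, `base_domain`, `get_outbound`) are illustrative, not part of the Chilkat library:

```python
from collections import deque
from urllib.parse import urlparse

def base_domain(url):
    # Reduce a URL's host to its last two labels,
    # e.g. "directory.google.com" -> "google.com".
    host = urlparse(url).netloc
    return ".".join(host.split(".")[-2:])

def crawl(seed_urls, get_outbound, max_queue=1000):
    """Domain-limited breadth-first crawl.

    `get_outbound(url)` must return the outbound links found on a page
    (a real implementation would fetch and parse the page; here it is
    a stub so the queueing logic can be shown in isolation)."""
    queue = deque(seed_urls)
    seen_domains = set()
    visited = []
    while queue:
        url = queue.popleft()
        # Mark this base domain as seen before following its links.
        seen_domains.add(base_domain(url))
        visited.append(url)
        for link in get_outbound(url):
            # Only queue links that lead to base domains not seen yet.
            if base_domain(link) not in seen_domains:
                queue.append(link)
            # Don't let the seed queue grow too large.
            if len(queue) > max_queue:
                break
    return visited
```

The Chilkat sample additionally crawls up to five pages per domain and throttles uncached fetches; this sketch keeps only the seen-domain queueing logic, which is the part that keeps the crawler hopping between new sites instead of exhausting one.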
Original topic link:
http://dev-sy.com/vb/showthread.php?t=415
I'm available for any questions.
© Published posts express only the views of their authors and in no way represent the views of the forum administration (http://www.dzbatna.com) ©